
Anyone really worried about LLM AI?

As for your raindrop question, here's Claude's answer, including the math equations to work it out.

Interesting. None of the training sessions I've done on prompt engineering have advocated that style of justification for the task :D
 
Not worried at all. I'll be long dead before all that "I'll be back" cobblers happens.
 
Interesting. None of the training sessions I've done on prompt engineering have advocated that style of justification for the task :D
You have to give it the variables, or essentially you're just asking "how long is a piece of string?"
 
You have to give it the variables, or essentially you're just asking "how long is a piece of string?"
Aww, you changed the reason for asking the question :D
Yes, it's really just spec writing when it comes down to it.
 
This is honestly so daft, no offence.

You used the wrong tool (an LLM) for the wrong job (a raindrop question), and when it returned ******** you declared that all AI is wrong and it's not that big of a deal actually.

If this was 1990 I'm sure you would be telling everyone the internet is just a phase.

As for your raindrop question, here's Claude's answer, including the math equations to work it out.

View attachment 562536
You are on the inside looking at this and, just like LLMs, you have a narrow focus. You believe that because AI works well in your narrow area of expertise, it works well generally. Current AI is dumb. It is useful for specific tasks and needs expertly defined parameters to work with any degree of accuracy.

The raindrop question is highly valid, as it exposes the inability to apply a very basic check to what is output. Did the AI model that provided that much better answer (still a factor of 10 out, BTW) inform you of any assumptions it made, or point out that the elasticity of the interaction would affect the answer, or that the slowing of the pellet would add to the deflection, and therefore supply a range of possible maximums? No, and that is my point: what it outputs is an answer, not necessarily the right answer. Any mathematician would have pointed out their assumptions and the impact of varying those assumptions on the answer.

Yes, you can get AI to do that if you use a specialist to ask the questions in a specialist manner, and that is the whole point. It has uses for specialists. It is not clever enough to be used by the public and give answers with a high degree of accuracy. Therein lies the danger. A badly asked question gets an answer that is taken as correct, and mistakes get made based on that. Most people are not clever enough to understand that they have not asked the question correctly, let alone have any ability to check the answer.
 
Years ago I worked at a place that loaded thousands of pictures into a database. A program was then written to compare a new picture with what was in the database. It would return a set of matches.

The new image was loaded into the database whether a match was found or not. If there was a match then it was included in that set.

As new unknown images were tested, the database grew in different directions. This can be considered learning of sorts.
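The match-and-grow behaviour described above can be sketched roughly as follows. This is a minimal illustration, not the original system: the feature-vector representation, the cosine-similarity measure, and the 0.9 threshold are all assumptions for the sake of the example.

```python
import math

def similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ImageMatchStore:
    """Grows as new images arrive, whether or not they match."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold  # hypothetical similarity cut-off
        self.entries = []           # list of (image_id, feature_vector)

    def match_and_add(self, image_id, features):
        # Return the set of stored images similar to the new one...
        matches = {eid for eid, f in self.entries
                   if similarity(features, f) >= self.threshold}
        # ...then store the new image regardless, so the database
        # keeps growing "in different directions".
        self.entries.append((image_id, features))
        return matches
```

Because every new image is stored whether or not it matched, the database expands into regions it has never seen before, which is the "learning of sorts" the post describes.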

We do the same thing in our lives: we recognise items we see and use that information to decide whether something is broken, based on our knowledge of what an intact thing should look like. A crack in an engine casing is quite a good example. It is anomalous when compared to good engine cases. This can be captured in a computer program.

A different take on what some call AI.
 
My bad.
I looked at the raindrop answer without my glasses on and straight out of bed (best excuses I can muster).

On reflection, and with better vision, Claude has also made a complete hash of it. A deflection of 89.8 degrees is almost perpendicular to the pellet's flight: not a realistic figure at all. Better by a massive margin than asking GPT, but still complete cobblers. I can reflect on things said and written; I can have some intuition of what an answer should be. Current AI cannot. My own answer is 0.08 degrees. I'm no expert mathematician and can't model a collision between a spherical drop of water and a domed pellet, so if someone wants to argue the figure I have little to reply with. I do know that the correct answer is a small one, though.
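A rough upper-bound estimate shows why the answer has to be small. The numbers below are illustrative assumptions, not values from the thread: a ~0.5 g pellet at 240 m/s and a ~2 mm raindrop, with the generous assumption that the drop is flung sideways at the full pellet speed and all of that transverse momentum comes from the pellet.

```python
import math

# Assumed illustrative values (not from the thread):
pellet_mass = 0.5e-3   # kg, roughly a .177 pellet
pellet_speed = 240.0   # m/s, a plausible muzzle velocity
drop_mass = 4.2e-6     # kg, a ~2 mm diameter raindrop

# Generous upper bound on sideways momentum given to the drop:
transverse_p = drop_mass * pellet_speed
forward_p = pellet_mass * pellet_speed

# Deflection angle of the pellet from its original line:
deflection_deg = math.degrees(math.atan(transverse_p / forward_p))
print(f"{deflection_deg:.2f} degrees")  # a fraction of a degree
```

Even with this deliberately generous bound the deflection comes out well under one degree, which is consistent with the intuition that 89.8° cannot be right.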

If I assume Claude has simply misstated the pellet deflection as the raindrop deflection, can we then assume a much more likely answer of 0.2 degrees for the pellet? Whether we can assume that or not depends on the assumptions made. If it can misstate that, though, then what else can it misstate? The raindrop will of course not be deflected but dissipated in many directions, hence my inability to model it.

As I said it's a tool for experts who can operate it correctly just like any tool. Intelligent it is not.
 
My bad.
I looked at the raindrop answer without my glasses on and straight out of bed (best excuses I can muster).

On reflection, and with better vision, Claude has also made a complete hash of it. A deflection of 89.8 degrees is almost perpendicular to the pellet's flight: not a realistic figure at all. Better by a massive margin than asking GPT, but still complete cobblers. I can reflect on things said and written; I can have some intuition of what an answer should be. Current AI cannot. My own answer is 0.08 degrees. I'm no expert mathematician and can't model a collision between a spherical drop of water and a domed pellet, so if someone wants to argue the figure I have little to reply with. I do know that the correct answer is a small one, though.

If I assume Claude has simply misstated the pellet deflection as the raindrop deflection, can we then assume a much more likely answer of 0.2 degrees for the pellet? Whether we can assume that or not depends on the assumptions made. If it can misstate that, though, then what else can it misstate? The raindrop will of course not be deflected but dissipated in many directions, hence my inability to model it.

As I said it's a tool for experts who can operate it correctly just like any tool. Intelligent it is not.
89.8 degrees from where? I'm guessing it's counting 0 as perpendicular to the travel of the pellet. And how do you know your answer of 0.08 is correct, too?

Again though, you are using the wrong tool for the job. Using a language model for maths is like using a spoon as a hammer and declaring all hammers shite.

You are on the inside looking at this and, just like LLMs, you have a narrow focus. You believe that because AI works well in your narrow area of expertise, it works well generally. Current AI is dumb. It is useful for specific tasks and needs expertly defined parameters to work with any degree of accuracy.

The raindrop question is highly valid, as it exposes the inability to apply a very basic check to what is output. Did the AI model that provided that much better answer (still a factor of 10 out, BTW) inform you of any assumptions it made, or point out that the elasticity of the interaction would affect the answer, or that the slowing of the pellet would add to the deflection, and therefore supply a range of possible maximums? No, and that is my point: what it outputs is an answer, not necessarily the right answer. Any mathematician would have pointed out their assumptions and the impact of varying those assumptions on the answer.

Yes, you can get AI to do that if you use a specialist to ask the questions in a specialist manner, and that is the whole point. It has uses for specialists. It is not clever enough to be used by the public and give answers with a high degree of accuracy. Therein lies the danger. A badly asked question gets an answer that is taken as correct, and mistakes get made based on that. Most people are not clever enough to understand that they have not asked the question correctly, let alone have any ability to check the answer.
This is again daft (no offence).

You're arguing that AI isn't intelligent; we know. No one is arguing it is.

You're saying it's dumb, maybe. But that's not relevant either.

You're bringing up my example of the cancer-screening AI and narrow tasks; you would never use an LLM for this. If you want an AI/ML model for a specific task, you train it only on that specific task. This is not a limiting factor.

And finally, your point about the public being too dumb to use it: that's not the AI's fault.
 
Any of you guys watched Ex Machina?
I highly recommend watching it... certainly thought-provoking.

Ex Machina | Examining Our Fear of Artificial Intelligence

 
No, it is the AI that is the dumb one, not the public. The public are, by weight of numbers, average. For sure it's not the fault of the machine; it's just inherent in its design.

If Claude thinks 0 degrees deflection is perpendicular, it has zero understanding of the word deflection. Deflection is a word with a definition in a dictionary. Either it can't read a dictionary and understand it, or it genuinely thinks a pellet is going to go sideways. Those are the only two options, and both paint a poor answer.

I have stated my maths is insufficient for a quality answer. Have you been using the same dictionary as Claude? I have freely admitted my model is basic. My answer is just the best I've seen, and from real-world observations it's not impossible it may be close. What my answer isn't is 90 degrees out.
 
Personally I wonder if the question is wrong. Does the raindrop deflect the round towards a perpendicular angle at all, or rather "nudge" it onto a parallel trajectory, much like what they did with the DART mission?
Which could be why it's saying the deflection is towards 90°: it is, but only very briefly, and with far less force than the pellet is carrying.
 
A new model just dropped from OpenAI.


"OpenAI o1 ranks in the 89th percentile on competitive programming questions (Codeforces), places among the top 500 students in the US in a qualifier for the USA Math Olympiad (AIME), and exceeds human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems (GPQA)."

As I said, people were using the wrong tool for the wrong job and crying about its bad outputs.
 
Could not make a bogholder bracket from workshop scraps though, or do the millennia of small simple tasks even a half-witted human can do.

Had the "pleasure" of working with graduates with mainframe design programmes since 1998, bringing drawings to me, and I tell them "can't make that" or "that will break".

Just because they can click a few buttons, they think the thing they have "drawn" will work and can be made.

49 years' experience actually making stuff often top-trumps multiple degrees and Catia / Pro Engineer / name another fancy package of your choice.

Only this year I have been involved in advising folks about a cast item that breaks on bleedin' aeroplanes. The cause is obvious: a machined face with no stress-riser radius. It breaks all the time in the same place, designed by a world-famous company; God knows how it got into production. Their machine shop should have picked it up, but in some places minions cannot speak up. I was lucky in working in intense, competitive situations where everyone's output was taken on board, as there was no time for fannying about, and people died if stuff was got wrong.

ATB, ED
 
Just because they can click a few buttons, they think the thing they have "drawn" will work and can be made.
Design for manufacture.

See it incessantly in jewellery too... Oh look, I've zoomed in on features < 0.1mm in size, forgetting that by the time it's been milled/grown, cast, cleaned up and polished they're going to be nothing more than dust.

I use Rhino, but it's usually for stuff I could make by hand, and applying the same rules.

When I'm not querying CoPilot on, for example, likely root causes of errors from GStreamer (today's fun).
 
Personally I wonder if the question is wrong. Does the raindrop deflect the round towards a perpendicular angle at all, or rather "nudge" it onto a parallel trajectory, much like what they did with the DART mission?
Which could be why it's saying the deflection is towards 90°: it is, but only very briefly, and with far less force than the pellet is carrying.
If it moved towards 90° at any time, the pellet would then continue towards 90° unless it hit something else, or Newton got it wrong. The angle the pellet leaves at is the maximum angle ever reached by the deflection, unless the collision is highly complex and the pellet suffers multiple deflections (unlikely).
 
I think he'll be OK as long as the brothers don't have a big bust-up on the reunion tour.
Oops! Misread the thread title!
I'll get my coat!
 
Not an LLM, but OMG!!! Not 100% sure if it's legit. No reason it couldn't be, considering Boston Dynamics' capabilities, but that's a MUCH smaller package...


View attachment 565701
 