I submit humans are no different. It can take years of seemingly good communication with a human until you finally realize they never really got your point of view. Language is ambiguous and only a tool for communicating thoughts. The underlying essence, thought, is so much more complex that language is always just a rather weak approximation.
The difference is that large language models don't think at all. They just string language "tokens" together using fancy math and statistics and spew them out in response to the tokens they're given as "input". I realize that they're quite convincing about it, but they're still not doing at all what most people think they're doing.
As far as I've read, there are opinions to the contrary: most LLMs do start out as exactly that, learning which word most likely comes next and nothing more. But instruct-tuned models get fine-tuned into something that sits somewhere in between.
I imagine the instruct model ends up with extra logic behind selecting the next word compared to the base model.
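To make the "most likely next word" idea concrete, here's a deliberately toy sketch in Python. Every word and probability below is made up, and a real model conditions on the whole context with a neural network rather than a lookup table; the point is just that the generation loop is identical for a base model and an instruct-tuned one, and fine-tuning only changes the numbers it consults.

```python
# Toy illustration only: hand-written "bigram models" with made-up probabilities.
# Real LLMs score candidates with a neural net over the full context, but the
# decoding loop is the same idea: score every candidate next token, pick one, repeat.

import random

# Made-up conditional probabilities P(next_word | current_word).
BASE_MODEL = {
    "<start>": {"the": 0.5, "a": 0.3, "bananas": 0.2},
    "the":     {"cat": 0.6, "answer": 0.4},
    "a":       {"fruit": 0.7, "cat": 0.3},
    "cat":     {"<end>": 1.0},
    "answer":  {"<end>": 1.0},
    "fruit":   {"<end>": 1.0},
    "bananas": {"<end>": 1.0},
}

# "Instruct tuning" here is just a different probability table; the
# generation loop below is untouched.
INSTRUCT_MODEL = {
    "<start>": {"a": 0.8, "the": 0.2},
    "a":       {"fruit": 0.9, "cat": 0.1},
    "the":     {"answer": 0.9, "cat": 0.1},
    "cat":     {"<end>": 1.0},
    "answer":  {"<end>": 1.0},
    "fruit":   {"<end>": 1.0},
}

def generate(model, max_tokens=10, greedy=True):
    """Repeatedly pick the next word until we hit <end>."""
    word, output = "<start>", []
    for _ in range(max_tokens):
        candidates = model[word]
        if greedy:
            # "Most likely next word": take the argmax.
            word = max(candidates, key=candidates.get)
        else:
            # Or sample in proportion to the probabilities.
            word = random.choices(list(candidates), weights=list(candidates.values()))[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate(BASE_MODEL))      # -> "the cat"
print(generate(INSTRUCT_MODEL))  # -> "a fruit"
```

Run it and both "models" dutifully emit their most probable continuation; neither one decided anything, which is roughly the distinction being argued over in this thread.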
The argument is very reductionist though, since if I ask "What is a kind of fruit?" to a human...they really are just providing the most likely word based on their corpus of knowledge. Difference atm is that humans have ulterior motives, making them think "why are they asking me this? When's lunch? Damn this annoying person stopped me to ask me dumb questions, I really gotta get home to play games".
Once models start getting ulterior motives, then I think the space for logic will improve; atm even during fine-tuning there's not much imperative for it to learn any decent logic, because it has no motivations beyond "which response answers this query" - a human built like that would work exactly the same way, and you see the same kind of thoughtless, regurgitative behaviours once people have learned a simple job too well and are on autopilot.
> "I know a lot of people who, according to your definition, also actually dont think at all. They just string together words ..."
Politicians, when asked to make laws related to technology? Heck, an LLM might actually do better than the average octogenarian we've got doin' that job currently.
I understand it to work by predicting the next most likely output token, based on the tokens that came before (the user's input plus whatever it has already generated).
I also understand that, simplistic as the above explanation is, and perhaps even wrong in some way, it is a more thorough explanation than anyone has thus far been able to provide of how, exactly, human consciousness and thought work.
In any case, my point is this: nobody can say “LLMs don’t reason in the same way as humans” when they can’t say how human beings reason.
I don’t believe what LLMs are doing is in any way analogous to how humans think. I think they are yet another AI parlor trick, in a long line of AI parlor tricks. But that’s just my opinion.
Without being able to explain how humans think, or point to some credible source which explains it, I’m not going to go around stating that opinion as a fact.
Does your brain completely stop doing anything between verbal statements (output)? An LLM does stop doing stuff between requests to generate a string of language tokens (its entire purpose). When not actually generating tokens, an LLM doesn't sit there and think things like "Was what I just said correct?" or "Hmm. That was an interesting discussion. I think I'll go research more on the topic". Nope. It just sits there idle, waiting for another request to generate text. Does your brain ever sit 100% completely idle?
What does that have to do with how the human brain operates while generating a thought as compared to how an LLM generates output? You've only managed to state something everyone knows (people think about stuff constantly) without saying anything new about the unknown being discussed (how people think).