I understand it to work by predicting the most likely next output token, based on the preceding input.
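For concreteness, here is a minimal sketch of that token-by-token loop. The `model` callable is a hypothetical stand-in for a real network, and real systems usually sample with temperature rather than always taking the argmax:

```python
import numpy as np

def generate(model, tokens: list[int], n_new: int = 20) -> list[int]:
    """Greedy decoding: repeatedly append the most likely next token."""
    for _ in range(n_new):
        logits = model(tokens)                 # scores over the whole vocabulary
        probs = np.exp(logits - logits.max())  # softmax: scores -> probabilities
        probs /= probs.sum()
        tokens.append(int(np.argmax(probs)))   # commit the single most likely token
    return tokens
```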
I also understand that, simplistic as the above explanation is, and perhaps even wrong in some way, it is a more thorough explanation than anyone has thus far been able to provide of how, exactly, human consciousness and thought work.
In any case, my point is this: nobody can say “LLMs don’t reason in the same way as humans” when they can’t say how human beings reason.
I don’t believe what LLMs are doing is in any way analogous to how humans think. I think they are yet another AI parlor trick, in a long line of AI parlor tricks. But that’s just my opinion.
Without being able to explain how humans think, or point to some credible source which explains it, I’m not going to go around stating that opinion as a fact.
Does your brain completely stop doing anything between verbal statements (output)? An LLM does stop doing stuff between requests to generate a string of language tokens (its entire purpose). When it isn't actually generating tokens, an LLM doesn't sit there thinking things like "Was what I just said correct?" or "Hmm, that was an interesting discussion. I think I'll go research the topic further." Nope. It just sits there idle, waiting for the next request to generate text. Does your brain ever sit 100% completely idle?
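To make that contrast concrete, here is a hypothetical sketch of the serving loop being described (`model.generate` is a placeholder, not any specific library's API); between requests the thread simply blocks, and the model performs no computation at all:

```python
import queue

def serve(model, requests: queue.Queue) -> None:
    while True:
        prompt = requests.get()         # blocks, fully idle, until a request arrives
        reply = model.generate(prompt)  # the only time the model computes anything
        print(reply)                    # hand back the output, then go idle again
```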
What does that have to do with how the human brain operates while generating a thought, as compared to how an LLM generates output? You've only managed to state something everyone knows (people think about stuff constantly) without saying anything new about the unknown under discussion (how people think).