The text output of an LLM is the thought process. In this context the main difference between humans and LLMs is that LLMs can't have internalized thoughts. There are of course other differences too, like the fact that humans have a wider gamut of input: visuals, sound, input from other bodily functions. And the fact that we have live training.
It's not clear whether or not LLMs have internal thoughts -- each token generation could absolutely have a ton of thought-like modelling in the hidden layers of the network.
What is known is that these internal thoughts get erased each time a new token is generated. That is, it starts from scratch, working only from the contents of the text, each time it generates a word. But you could postulate that similar prompt text leads to similar "thoughts" and/or navigation of the concept web, and therefore the thoughts are continuous in a sense.
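To make that concrete, here is a toy sketch of autoregressive decoding in plain Python -- not any real model or inference library, the function and vocabulary are made up purely for illustration. The only state carried from one step to the next is the token sequence itself; whatever intermediate activations the forward pass computed are thrown away and recomputed from the text on the next step. (Real implementations cache attention keys/values for speed, but that's an optimization equivalent to recomputing from the text, so the picture is the same.)

```python
# Toy sketch of autoregressive decoding (hypothetical names, no real model).
import random

def model_forward(tokens):
    # Stand-in for a transformer forward pass. The "hidden" activations here
    # exist only inside this call and are discarded when it returns.
    hidden = [hash(t) % 1000 for t in tokens]   # the ephemeral "thought-like" state
    vocab = ["cat", "dog", "tree", "<eos>"]
    return {w: random.random() for w in vocab}  # logits; `hidden` is never returned

def generate(prompt_tokens, max_new=5):
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        logits = model_forward(tokens)            # recomputed from the text alone
        next_token = max(logits, key=logits.get)  # greedy pick
        if next_token == "<eos>":
            break
        tokens.append(next_token)                 # only the token is fed back
    return tokens

print(generate(["the", "animal", "is", "a"]))
```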
True, LLMs definitely have something that is "thought-like".
But today's networks lack the recursion (feedback where the output can go directly back into the input) that is needed for the type of internalized thoughts that humans have. I guess this is one thing you are pointing at by mentioning the continuity of the internals of LLMs.
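For contrast, here is what that kind of feedback looks like in an RNN-style recurrence -- again just a toy with made-up arithmetic, not a real architecture. The hidden state from one step is fed straight into the next step, so internal state can persist and evolve without ever being flattened into output tokens first.

```python
# Toy RNN-style recurrence (made-up arithmetic, purely illustrative):
# the hidden state is fed back directly into the next step, so internal
# state can persist without being serialized into output tokens.
def rnn_step(hidden, token_id):
    new_hidden = (hidden * 31 + token_id) % 10007  # depends on the previous hidden state
    output_id = new_hidden % 50                    # what actually gets emitted
    return new_hidden, output_id

hidden = 0
for token_id in [12, 7, 42]:
    hidden, output_id = rnn_step(hidden, token_id)  # hidden state carried forward
    print(hidden, output_id)
```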