A human mind works by cross-referencing sensory inputs with internal state, which may result in answering a question through some sort of reasoning; that process may resemble the architecture of a neural net.

An LLM is a stateless function that answers a question. It constructs a working representation of the question. It cannot be aware of itself because there is nothing to be aware of. It is a process, not a thing.

Intelligence and self-awareness are fluffy, poorly defined concepts. But you can’t be self-aware if you don’t have a self, and a representation of a prompt is not a self. Personally, I don’t think we should consider anything that does not exist through continuous time to be intelligent.



Any stateful function can be made stateless by turning the state into an argument to the function and the updated state into an additional return value: y := f(x) becomes y, z' := f(x, z). The usual implementations of LLMs are stateless in exactly this way. The actual operation is autoregressive - i.e., stateful and developing over time, where the state is the set of intermediate activations of the network - but it's often expressed as a stateless function to play nice with accelerators. So, on this basic point I think you're confused about how these operate.
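To make the y, z' := f(x, z) point concrete, here is a minimal Python sketch with invented names (real inference servers thread a KV cache through the forward pass in roughly this shape, but this is an illustration, not any actual API):

    def generate_step(token, state):
        # Stateless step: all "memory" arrives as an argument.
        new_state = state + [token]        # stand-in for appending to a KV cache
        next_token = sum(new_state) % 101  # stand-in for the real forward pass
        return next_token, new_state

    # The autoregressive loop: statefulness lives entirely in the caller,
    # which feeds each updated state back into the next call.
    state, token = [], 42
    for _ in range(5):
        token, state = generate_step(token, state)

The step function never remembers anything between calls; the loop does, by construction.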

Intelligence and self-awareness are distinct concepts.

LLMs work with text - text *is* their sensorium. Multimodal models that process both text and images already exist, and more broadly multimodal ones are on their way. But even for text-only LLMs, we can define the text as sensory input, which is combined and cross-referenced with state.

And, really, there are two kinds of state: the intermediate model activations and the model weights themselves. Together, these are something like short-term and long-term memory.
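Extending the sketch above (again with invented names, as an illustration rather than a real architecture): the weights are fixed across every call, like long-term memory, while the threaded state is built up within a single generation and discarded afterwards, like short-term memory.

    WEIGHTS = 7  # long-term: frozen after training, shared by every call

    def generate_step(token, cache):
        # Short-term: the cache exists only within one generation and is
        # rebuilt from scratch for the next prompt.
        cache = cache + [token]
        return (WEIGHTS * sum(cache)) % 101, cache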

The rest of your arguments are very anthropocentric. Why should continuous time matter for intelligence? That seems entirely arbitrary.

How often do humans apply reasoning to make a decision? Are humans not intelligent when not employing reasoning? Does reasoning require language? And are humans without language intelligent? What about animals? Any definition of intelligence really should spend some time grappling with the facts of animal intelligence...

Is a dead human intelligent? If not, the important part of intelligence would seem to be the process, not the thing/body.


Recasting something as “autoregressive” by moving the numbers around doesn’t mean the statelessness is meaningless. A stateless thing is stateless. It has no internal state. It has no self to be aware of.

> Why should continuous time matter for intelligence? That seems entirely arbitrary.

Intelligence is arbitrary. This is a personal view that helps keep the concept meaningful. If you exist only discretely, you may as well not exist; you’re just a concept.

> How often do humans apply reasoning to make a decision? Are humans not intelligent when not employing reasoning? Does reasoning require language? And are humans without language intelligent? What about animals? Any definition of intelligence really should spend some time grappling with the facts of animal intelligence...

You’re gradually shifting the bar from “self-aware” to “capable”, and I don’t care about the latter.



