
> They belong in different categories

Categories of _what_, exactly? What word would you use to describe this "kind" of which LLMs and humans are two very different "categories"? I simply chose the word "cognition". I think you're getting hung up on semantics here a bit more than is reasonable.





This is "category" in the sense of Gilbert Ryle's category error.

A logical type or a specific conceptual classification dictated by the rules of language and logic.

So yes, this is exactly a case of getting hung up on the precise semantic meaning of the words being used.

The lack of precision is going to have huge consequences given bets this large on the idea that we have "intelligent" machines that "think" or have "cognition". In reality we have probabilistic language models, and all kinds of category errors in the language surrounding them.

Probably a better reference here: "category" in this sense is lifted from Bertrand Russell's Theory of Types.

It is the loose equivalent of asking why you are getting hung up on the type of a variable in a programming language. A float or a string? Who cares, if it works?

The problem is in introducing non-obvious bugs.
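
To stretch the analogy (a minimal, hypothetical Python sketch; the names are invented purely for illustration): the program "works" in the sense that it runs without error, yet the answer is silently wrong because a string was treated as a number.

    # A value read from user input arrives as a string, not a number.
    price = "19.99"    # looks like a float, is actually a str
    quantity = 3

    # Python happily multiplies a str by an int (no error raised),
    # but it repeats the string instead of computing a total.
    total = price * quantity
    print(total)       # 19.9919.9919.99

    # Respecting the type boundary explicitly gives the intended result.
    total = float(price) * quantity
    print(total)       # ~59.97, up to float rounding

That is the worry with "cognition": the word runs fine in a sentence, but the category mistake quietly corrupts every conclusion drawn from it.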


> Categories of _what_, exactly?

Precisely. At least apples and oranges are both fruits, and it makes sense to compare, e.g., the sugar content of each. But an LLM and the human brain are as different as the wind and the sunshine. You cannot measure the wind speed of the sun, and you cannot measure the UV index of the wind.

Your choice of words here was rather poor, in my opinion. Statistical models do not have cognition any more than the wind has ultraviolet radiation. Cognition is a well-studied phenomenon; there is a whole field of science dedicated to it. And while the cognition of animals is often modeled using statistics, statistical models in themselves do not have cognition.

A much better word here would be "abilities": these tests demonstrate the different abilities of LLMs compared to human abilities (or even the abilities of traditional [specialized] models, which often do pass these kinds of tests).

Semantics often do matter, and what worries me is that these statistical models are being anthropomorphized way more than is healthy. People treat them like the crew of the Enterprise treated Data, when in fact they should be treated like the ship's computer. And I think this is because of a deliberate (and malicious/consumer-hostile) marketing campaign from the AI companies.


It's easy to handwave this away if you pick arbitrary analogies, though.

If we stay on topic, it's much harder to do, since we don't actually know how the brain works, beyond at least that it is a computer doing (almost certainly) analog computation.

Years ago I built a quasi-mechanical calculator. The computation was done mechanically, and the interface was done electronically. From a calculator's POV it was an abomination, but a few abstraction layers down they were both doing the same thing, albeit with my mecha-calc being dramatically worse at it.

I don't think the brain is an LLM, the way my mecha-calc was a (slow) calculator, but I also don't think we know enough about the brain to firmly put it many degrees away from an LLM. Both are in fact electrical signal processors doing heavy statistical computation. I doubt you believe the brain is a trans-physical magic soul box.


But we do know how the brain works. We have studied the brain extensively; it is probably one of the most studied phenomena in our universe (well, barring alien science), and we do know it is not a computer but a neural network [1].

I don't believe the brain is a trans-physical magic soul box, nor do I think the brain is doing anything similar to an LLM (apart from some superficial similarities; some [like the artificial neural network] are in LLMs because they were inspired by the brain).

We use the term cognition to describe the intrinsic properties of the brain and how it transforms stimuli into responses, and there are several fields of science dedicated to studying this cognition.

Just to be clear, you can describe the brain as a computer (a biological computer, totally distinct from digital or even mechanical computers), but that will only be an analogy; or rather, you are describing the extrinsic properties of the brain, some of which it happens to share with some of our technology.

---

1: Note, not an artificial neural network, but an OG neural network. AI models were largely inspired by biological brains, and in some respects model them.


Wind and sunshine are both types of weather; what are you talking about?

They both affect the weather, but in totally different ways and by completely different means. Similarly, the mechanism by which the human brain produces output is completely different from the mechanism by which an LLM produces output.

What I am trying to say is that the intrinsic properties of the brain and an LLM are completely different, even though the extrinsic properties might appear the same. This is also true of the wind and the sunshine. It is not unreasonable to claim (though I would disagree) that "cognition" is almost by definition the sum of all intrinsic properties of the human mind (I would disagree only on the merit of animal and plant cognition existing, and the former [probably] having intrinsic properties similar to human cognition).




