
I think it's just that I understand LLMs better than you do, and I know that they are very different from human intelligence. Here are a couple of differences:

- LLMs use a fixed amount of compute when producing an answer. To the extent that they don't, they are doing function calling, and that behaviour is not attributable to the LLM itself: when a model uses a calculator, it is displaying the calculator's intelligence, not its own (see the first sketch after this list).

- LLMs do not have memory, and where they do, it is recent, limited, and unlike the memory of any being so far. They don't remember what you said four weeks ago, and they don't incorporate it into their future behaviour. Where a memory does exist, the way it is written and recalled is very different from how humans remember, partly because the system is offered as a free service to many users at once. Again, to the extent that they appear to remember, those properties belong not to the LLM but to another layer reached via function calling (see the second sketch below).
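A minimal sketch of the first point, in Python, with entirely hypothetical names (calculator, run_llm; no real LLM API is used): the exact arithmetic lives in an ordinary function outside the model, and the "model" only routes the request and stitches the result into prose.

    # Hypothetical sketch: the tool layer, not the model, does the math.
    def calculator(expression: str) -> str:
        """Exact arithmetic: deterministic, fixed-cost, and not part of the LLM."""
        # Toy evaluator for this sketch only; never eval untrusted input.
        return str(eval(expression, {"__builtins__": {}}))

    def run_llm(prompt: str, tools: dict) -> str:
        """Stand-in for a model call. A real LLM would emit a structured
        tool request; here we hard-code one to show where the boundary sits."""
        tool_request = {"name": "calculator", "arguments": "123456789 * 987654321"}
        result = tools[tool_request["name"]](tool_request["arguments"])
        # The model's own contribution is only the surrounding prose.
        return f"The product is {result}."

    print(run_llm("What is 123456789 * 987654321?", {"calculator": calculator}))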

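And a sketch of the second point, again with hypothetical names (MemoryStore, answer): the model itself is stateless across calls, so anything it "remembers" is fetched from an external store and pasted back into the prompt by a separate layer.

    from datetime import datetime, timedelta

    class MemoryStore:
        """External memory layer. Its persistence belongs to this component,
        not to the LLM it serves."""
        def __init__(self):
            self.entries = []  # (timestamp, user_id, text)

        def save(self, user_id: str, text: str) -> None:
            self.entries.append((datetime.now(), user_id, text))

        def recall(self, user_id: str, window: timedelta) -> list:
            cutoff = datetime.now() - window
            return [t for ts, uid, t in self.entries if uid == user_id and ts >= cutoff]

    def answer(prompt: str, user_id: str, store: MemoryStore) -> str:
        # Retrieved memories are simply prepended to the prompt; the model
        # starts from the same fixed weights on every call.
        memories = store.recall(user_id, window=timedelta(weeks=4))
        context = "\n".join(memories)
        return f"[model sees]\n{context}\n{prompt}"

    store = MemoryStore()
    store.save("alice", "My cat is named Turing.")
    print(answer("What is my cat's name?", "alice", store))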
LLMs are a perception layer for language, and perhaps a layer for generating output, but they are not the intelligence itself.


