
> LLMs are very good at outputting texts that eerily mimic human language.

What a bizarre claim. If LLMs are not actually outputting language, why can I read what they output? Why can I converse with them?

It's one thing to claim LLMs aren't reasoning, which is what you later do, but you're disconnected from reality if you think they aren't actually outputting language.



You have missed the point entirely.


My point is that you made a bad point. Just be more precise with your phrasing if that wasn't your intended meaning.


Is there a block button? Or a filter setting? You are so unaware of and uninquisitive about actual human language that you cannot see the gross assumptions you are making.



