
This is why I believe that anthropomorphizing LLMs, at least with respect to cognition, is actually a good way of thinking about them.

There's a lot of surprise expressed in the comments here, as in the online discussion generally. Also a lot of "if only they just did/didn't...". But neither the problem nor the inadequacy of the proposed solutions should be surprising; they're fundamental consequences of LLMs being general systems, and the easiest way to build a good intuition for them starts with realizing that humans exhibit those exact same problems, for the same reasons.


