
It’s not understanding, it’s explanation. I read the paper, I posted it.

Start with what human explanations are:

https://www.alisongopnik.com/Papers_Alison/Explain%20final.p...

Now what are words in relation to that drive?

“We refute (based on empirical evidence) claims that humans use linguistic representations to think.” — Ev Fedorenko, Language Lab, MIT, 2024

What are LLMs?

Since language or tokens cannot accurately represent anything of merit in brains, the interpretation of what is semantic versus what is a task-variable action potential is subject to the Gopnik problem.

It’s an enforced circularity that never allows the brain/ecology to speak for itself in its native process.



Thoughts would be more analogous to the weights in the LLM, rather than to the language or tokens you mention.



