
> o3 is also a significant hallucinator. I spent quite a bit of time with it last weekend and found it to be probably far worse than any of the other top models. The catch is that its hallucinations are quite sophisticated. Unless you are using it on material about which you are extremely knowledgeable, you won't know.

At least 3/4 of humans identify with a religion which at best can be considered a confabulation or hallucination in the rigorous terms you're using to judge LLMs. Dogma is almost identical to the doubling-down on hallucinations that LLMs produce.

I think what this shows about intelligence in general is that, without grounding in physical reality, it tends to hallucinate from a statistical model of reality and confabulate further ungrounded statements unless there is a strong, active effort to ground each statement. LLMs have the disadvantage of having no real-time grounding in most instantiations (Gato and related robotics projects excepted). This is not so much a problem with transformers as with the lack of feedback tokens in most LLMs. Pretraining on ground-truth text gives an excellent prior over next tokens, and I think feedback, either into the weights (continuous fine-tuning) or as real-world feedback tokens appended in response to outputs, can get transformers to hallucinate less in the long run (e.g. after responding to feedback when out of distribution).
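
To make the "real-world feedback as tokens" idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: generate() stands in for an LLM call and check_against_world() stands in for whatever grounding source is available (a tool result, sensor reading, retrieval hit, or a human). The point is only the loop structure, in which observations are appended to the context as tokens so later generations are conditioned on them rather than on the pretrained prior alone.

    from dataclasses import dataclass

    @dataclass
    class Feedback:
        ok: bool
        text: str

    # Hypothetical stand-ins: a real system would call an LLM here and query
    # some grounding source (tool output, sensor reading, retrieval, a human).
    def generate(context: str) -> str:
        return "claim conditioned on: " + context[-40:]

    def check_against_world(output: str) -> Feedback:
        return Feedback(ok=False, text="observed result differs from the claim")

    def grounded_generate(prompt: str, max_rounds: int = 3) -> str:
        # Append real-world feedback to the context after each attempt, so the
        # next generation is conditioned on observations, not just the prior.
        context = prompt
        output = ""
        for _ in range(max_rounds):
            output = generate(context)
            feedback = check_against_world(output)
            if feedback.ok:
                break
            context += "\n" + output + "\nFeedback: " + feedback.text
        return output

The same feedback signal could instead be written into the weights (continuous fine-tuning); the context version is just the cheaper, inference-time variant of the same grounding loop.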



Arguing that many humans are stupid or ignorant does not support the idea that an LLM is intelligent. This argument is reductive in that it ignores the many, many diverse signals influencing the part of the brain that controls speech. Comparing a statistical word predictor and the human brain isn’t useful.


I'm arguing that it's natural for intelligent beings to hallucinate/confabulate when ground truth can't be established. Stupidity does not apply to, e.g., Isaac Newton or Kepler, who were very religious, and any ignorance of theirs wasn't due to a fault in their intelligence per se. We as humans make our best guesses about what reality is even in cases where those guesses can't be grounded, e.g. string theory or M-theory if you want a non-religious example.

Comparing humans to transformers is actually an instance of the phenomenon; we have an incomplete model of "intelligence" and we posit that humans have it, but our model is only partially grounded. We assume humans almost certainly have intelligence, are unsure which animals might have it, and are arguing about whether it's even well-typed to talk about transformer/LLM intelligence.


> religion which at best can be considered a confabulation or hallucination in the rigorous terms you're using to judge LLMs

Non-religious people are not exempt. Everyone has a worldview (or prior commitments, if you like) through which they understand the world. If you encounter something that contradicts your worldview directly, even repeatedly, you are far more likely to "hallucinate" an understanding of the experience that allows your worldview to stay intact than to change your worldview.

I posit that humans are unable to function without what amounts to a religion of some sort -- be it secular humanism, nihilism, Christianity, or something else. When one is deposed at a societal level, another rushes in to fill the void. We're wired to understand reality through definite answers to big questions, whatever those answers may be.



