In people there is a difference between unconscious hallucination and intentional creativity, though there may be situations where they're not distinguishable. With LLMs, it's hard to talk about intentionality at all.
A hallucination isn't a creative new idea; it's blatantly, provably wrong information.
If an LLM had actual intellectual ability, it could tell "us" how to improve models. They can't. They're literally defined by the tokens they were trained on and the statistics they use to generate token chains.
They're only as creative as the most statistically relevant token chains in their training data, which were written by _people_ who actually used intelligence to type words on a keyboard.
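To make the "statistics over token chains" point concrete, here's a toy sketch. It's a bigram count model over a made-up corpus, nothing like a real transformer, but the generation loop has the same shape: the next token is always drawn from a distribution learned from human-written text.

```python
import random
from collections import Counter, defaultdict

# Toy "statistics over token chains": a bigram model over a made-up corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each token follows each other token in the training text.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token in proportion to its training-set frequency."""
    tokens, counts = zip(*bigram_counts[prev].items())
    return random.choices(tokens, weights=counts, k=1)[0]

# Generate a "token chain" starting from "the".
token, chain = "the", ["the"]
for _ in range(6):
    token = next_token(token)
    chain.append(token)
print(" ".join(chain))  # e.g. "the dog sat on the mat ."
```

A real LLM replaces the count table with a neural network and the toy corpus with an enormous one, but the sampling loop is the same idea.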
Hallucinations are not novel ideas. They are novel combinations of tokens constrained by learned probability distributions.
I have mentioned Hume before, and will do so again. You can combine "golden" and "mountain" without seeing a golden mountain, but you cannot conjure "golden" without having encountered something that gave you the concept.
LLMs may generate strings they have not seen, but those strings are still composed entirely from training-derived representations. The model can output "quantum telepathic blockchain" but each token's semantic content comes from training data. It is recombination, not creation. The model has not built representations of concepts it never encountered in training; it is just sampling poorly constrained combinations.
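A minimal sketch of that last point, assuming nothing about any particular model (the vocabulary and "learned" logits below are invented for illustration): sampling can emit combinations that never appeared anywhere, especially at high temperature, but the support of the distribution is fixed by what was learned, so a token the model never encountered has zero probability.

```python
import math
import random

# Invented vocabulary and logits standing in for learned scores.
vocab = ["quantum", "telepathic", "blockchain", "mountain", "golden"]
logits = {"quantum": 1.2, "telepathic": 0.4, "blockchain": 0.9,
          "mountain": 0.1, "golden": 0.7}

def sample(temperature: float = 1.0) -> str:
    """Softmax over the learned logits, then draw one token."""
    weights = [math.exp(logits[t] / temperature) for t in vocab]
    return random.choices(vocab, weights=weights, k=1)[0]

# High temperature flattens the distribution and yields stranger, "poorly
# constrained" combinations, but never a token outside the vocabulary.
phrase = [sample(temperature=2.0) for _ in range(3)]
print(" ".join(phrase))  # e.g. "golden blockchain telepathic"
assert all(tok in vocab for tok in phrase)
```

That's the mechanical sense in which a hallucination is a recombination rather than a new concept.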
Can you distinguish between a false hallucination and a genuinely novel conceptual representation?
Sure they do. We just call them hallucinations and complain that they're not true.