> At the heart of the problem is the tendency for AI language models to confabulate, which means they may confidently generate a false output that is stated as being factual.
"Confabulate" is precisely the correct term; I don't know how we ended up settling on "hallucinate".
The bigger problem is that, whichever term you choose (confabulate or hallucinate), that's what they're always doing. When they produce a factually correct answer, it's just as much a fabrication from training data as when they're factually incorrect. Either term falsely implies that they "know" the answer when they get it right, but "confabulate" is worse because there aren't "gaps in their memory"; they're just always making things up.
About two years ago I was using Whisper locally to translate some videos, and "hallucination" is definitely the right word for some of its output! Just like the stereotype of schizophrenia: it would stay on task for a while, then start ranting about random things, "hearing" things that weren't in the audio, etc.
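For anyone curious what that kind of local setup looks like, here's a minimal sketch using the openai-whisper Python package (the file name and model size are just placeholders; ffmpeg needs to be installed for it to read video files):

```python
# Minimal sketch of running Whisper locally to translate a video's audio.
# Assumes: pip install openai-whisper, and ffmpeg available on PATH.
import whisper

# "medium" is an arbitrary choice; smaller models tend to hallucinate more,
# especially over long stretches of silence or background noise.
model = whisper.load_model("medium")

# task="translate" asks Whisper to output English regardless of source language.
result = model.transcribe("video.mp4", task="translate")

# Print each segment with timestamps; hallucinated segments often show up as
# repeated or off-topic text during silent or noisy portions of the audio.
for segment in result["segments"]:
    print(f"[{segment['start']:.1f}s -> {segment['end']:.1f}s] {segment['text']}")
```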