I am assuming that millions of dollars have already been spent trying to get LLMs to hallucinate less.
Even if problems feel soluble, they often aren't. You might have to invent an entirely new paradigm of text generation to solve the hallucination problem. Or it could be the Collatz Conjecture of LLMs: it "feels" so achievable, but you never actually get there.
Fusion will at best have a few dozen buyers once it's commercially viable, and then take decades to realise returns, but you can sell AI stuff to millions of customers for $20/month each, and do it today.