Maybe? But it also seems like you are not accounting for new information at inference time. Let's pretend I agree the LLM is a plagiarism machine that can produce no novelty in and of itself that didn't come from what it was trained on, and produces mostly garbage (I only half agree lol, and I think "novelty" is under-specified here).
When I apply that machine (with its giant pool of pirated knowledge) _to my inputs and context_ I can get results applicable to my modestly novel situation which is not in the training data. Perhaps the output is garbage. Naturally if my situation is way out of distribution I cannot expect very good results.
But I often don't care if the results are garbage some (or even most!) of the time if I have a way to ground-truth whether they are useful to me. This might be via running a compile, a test suite, a theorem prover, or the mk1 eyeball. Of course the name of the game is to get agents to do this themselves, and this is now fairly standard practice.
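Roughly the loop I mean, as a minimal Python sketch. Everything here is a placeholder, not any particular API: `llm_generate` stands in for whatever model client you use, and `candidate_patch.py` / `pytest` stand in for whatever artifact and oracle fit your setup.

```python
# Sketch of a generate-and-verify loop: keep sampling until an external
# oracle (compile, test suite, prover, ...) accepts the output, or give up.
import subprocess


def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for your actual LLM client call."""
    raise NotImplementedError


def tests_pass() -> bool:
    # Any ground-truth check works here; this one just runs pytest quietly.
    result = subprocess.run(["pytest", "-q"], capture_output=True)
    return result.returncode == 0


def generate_until_verified(prompt: str, max_attempts: int = 5) -> str | None:
    for _ in range(max_attempts):
        candidate = llm_generate(prompt)
        # Write the candidate somewhere the oracle can see it (hypothetical path).
        with open("candidate_patch.py", "w") as f:
            f.write(candidate)
        if tests_pass():
            return candidate  # only output that survives the oracle is kept
    return None  # every attempt failed verification; treat as no answer
```

The point is not the specific oracle but that garbage attempts cost only compute, since nothing unverified ever reaches you.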
I'm not here to convince you whether Markov chains are helpful for your use cases or not. I know from personal experience that even in cases where I have a logically constrained query I will receive completely nonsensical responses¹.
It does this all the time, but as often as not it then outputs nonsense again, just different nonsense, and if you keep it running long enough it starts repeating previous errors (presumably because some sliding context window is exhausted).
That's been my general experience, and that was the most recent example. People keep forgetting that unless they can independently verify the outputs, they are essentially paying OpenAI for the privilege of being very confidently gaslighted.
It would be a really nice exercise - for which I unfortunately do not have the time - to have a non-trivial conversation with the best models of the day and then rigorously fact-check every bit of output to determine its quality. Judging from my own experience (probably not a representative sample), it would be a very meager showing.
I only use AI as a means of last resort now, and then mostly as a source of inspiration rather than as a direct tool aimed at solving an issue. Used like that it has been useful on occasion, but it has at least as often been a tremendous waste of time.