How exactly is the bottom going to fall out? And are you really trying to present that you have practical experience building comparable tools to an LLM prior to the Transformer paper being written?
Now, there does appear to be some shenanigans going on with circular financing involving MSFT, NVIDIA, and SMCI (https://x.com/DarioCpx/status/1917757093811216627), but the usefulness of modern LLMs is undeniable. Given the state of the global economy and the financial engineering issues above, I would not be surprised if at some point there is a contraction and the AI hype settles down a bit. That said, LLMs could be made illegal and people would still run open source models indefinitely, and organizations would keep building proprietary models in secret, because LLMs are that good.
Since we are throwing out predictions, I'll throw one out. Demand for more accurate LLMs will bring methods like formal verification to the forefront, and I predict models/agents will eventually be able to formalize solved problems into machine-checked proofs that guarantee correctness. At that point you will be able to trust the outputs for things the model "knows" (i.e. has proved), and treat everything else as the probably-correct answers we rely on today.
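To make "formalize" concrete: the idea would be turning a claim the model has produced into a theorem that a proof checker (not the model) verifies. A deliberately trivial Lean 4 illustration:

    -- The model's claim "2 + 2 = 4" stated and proved as a theorem;
    -- the checker guarantees correctness, independent of the model.
    theorem two_plus_two : 2 + 2 = 4 := rfl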
Probably something like the following flow (rough code sketch after the list):
1) Users enter prompts
2) The model answers questions and feeds those conversations to a second model/program
3) Offline, this second model uses formal verification techniques to try to reduce the answers to formal proofs
4) The formal proofs are fed back into the first model's memory, and it uses those verified answers going forward
5) Future questions that map onto these formalized proofs can be answered at almost no cost and are guaranteed to be correct
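A minimal sketch of that loop in Python, purely to show the shape of it. Every name here (the model call, the formalizer, the proof cache, the normalizer) is a hypothetical stand-in, not a real LLM or prover API:

    from typing import Optional

    # Steps 4/5: "memory" of formally verified answers, keyed by a
    # normalized form of the question.
    verified_answers: dict[str, str] = {}

    def normalize(prompt: str) -> str:
        # Toy canonicalization so equivalent phrasings share a key.
        return " ".join(prompt.lower().split())

    def llm_answer(prompt: str) -> str:
        # Stand-in for the first model (step 2).
        return f"<model answer to: {prompt}>"

    def try_formalize(prompt: str, answer: str) -> Optional[str]:
        # Stand-in for the offline verifier (step 3): attempt to reduce
        # (prompt, answer) to a machine-checked proof, e.g. by emitting
        # Lean and running its checker. Returns a proof artifact on
        # success, None on failure.
        return None  # assume most answers don't formalize (yet)

    def ask(prompt: str) -> str:
        key = normalize(prompt)
        if key in verified_answers:
            # Step 5: verified cache hit; near-zero cost, and the
            # answer is backed by a checked proof.
            return verified_answers[key]
        answer = llm_answer(prompt)            # step 2
        proof = try_formalize(prompt, answer)  # step 3, ideally offline
        if proof is not None:
            verified_answers[key] = answer     # step 4: feed back into memory
        return answer                          # probably-correct fallback

The hard engineering part is probably the lookup step: deciding that a new natural-language question maps onto an existing proof is itself a nontrivial problem, and a toy normalize() obviously doesn't cut it.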
> And are you really trying to present that you have practical experience building comparable tools to an LLM prior to the Transformer paper being written?
I believe (could be wrong) they were talking about their prior GOFAI/NLP experience when referencing scaling systems.
In any case, is it really necessary to be so harsh about over-confidence and then go on to predict the future of solving hallucinations with your formal verification ideas?