You think it will be 25 years before we have a drop in replacement for most office jobs?
I think it will be less than 5 years.
You seem to be assuming that the rapid progress in AI will suddenly stop.
I think if you look at the history of compute, that is ridiculous. Making the models bigger or work more is making them smarter.
Even if there is no progress in scaling memristors or any exotic new paradigm, high speed memory organized to localize data in frequently used neural circuits and photonic interconnects surely have multiple orders of magnitude of scaling gains in the next several years.
> You seem to be assuming that the rapid progress in AI will suddenly stop.
And you seem to assume that it will just continue for 5 years. We've already seen the plateau start. OpenAI has tacitly acknowledged that they don't know how to make a next generation model, and have been working on stepwise iteration for almost 2 years now.
Why should we project the rapid growth of 2021–2023 5 years into the future? It seems far more reasonable to project the growth of 2023–2025, which has been fast but not earth-shattering, and then also factor in the second derivative we've seen in that time and assume that it will actually continue to slow from here.
At this point, the lack of progress since April 2023 is really what is shocking.
I just looked at the Midjourney subreddit to make sure I wasn't missing some great new model.
Instead, what I noticed were small variations on themes I had already seen a thousand times a year ago. Midjourney is so limited in what it can actually produce.
I am really worried that all this is much closer to a parlor trick than AGI.
"simple trick or demonstration that is used especially to entertain or amuse guests"
It all feels more and more like that to me than any kind of progress towards general intelligence.
There's this [0]. But also o1/o3 is that acknowledgment. They're hitting the limits of scaling up models, so they've started scaling compute [1]. That is showing some promise, but it's nowhere near the rate of growth they were hitting while next gen models were buildable.
No, but there's really very little reason to think that that makes the ol' magic robots less shit in any sort of well-defined way. Like, it certainly _looks_ like they've plateaued.
I often suspect that the tech industry's perception of reality is skewed by Moore's Law. Moore's Law is, quibbles aside, basically real, and has of course had a dramatic impact on the tech industry. But there is a tendency to assume that that sort of scaling is _natural_, and the norm, and should just be expected in _everything_. And, er, that is not the case. Moore's Law is _weird_.
> You seem to be assuming that the rapid progress in AI will suddenly stop.
> I think if you look at the history of compute, that is ridiculous. Making the models bigger or work more is making them smarter.
It's better to talk about actual numbers to characterise progress and measure scaling:
"
By scaling I usually mean the specific empirical curve from the 2020 OAI paper. To stay on this curve requires large increases in training data of equivalent quality to what was used to derive the scaling relationships.
"[^2]
"I predicted last summer: 70% chance we fall off the LLM scaling curve because of data limits, in the next step beyond GPT4.
[…]
I would say the most plausible reason is because in order to get, say, another 10x in training data, people have started to resort either to synthetic data, so training data that's actually made up by models, or to lower quality data."[^0]
“There were extraordinary returns over the last three or four years as the Scaling Laws were getting going,” Dr. Hassabis said. “But we are no longer getting the same progress.”[^1]
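For concreteness, the "2020 OAI paper" is presumably Kaplan et al.'s "Scaling Laws for Neural Language Models". A quick sketch of its published power-law fits (the constants are the paper's own; treat this as illustrative, not a current frontier-model estimate):

```python
# Power-law scaling curves from Kaplan et al. (2020).
# Constants are the paper's published fits; purely illustrative.

def loss_from_params(n_params: float) -> float:
    """Test loss as a function of non-embedding parameter count N."""
    N_C, ALPHA_N = 8.8e13, 0.076
    return (N_C / n_params) ** ALPHA_N

def loss_from_data(n_tokens: float) -> float:
    """Test loss as a function of dataset size D (tokens)."""
    D_C, ALPHA_D = 5.4e13, 0.095
    return (D_C / n_tokens) ** ALPHA_D

# Staying on the compute curve requires growing data alongside parameters:
# a 10x bigger model only delivers the predicted gain if D keeps pace.
for n in (1e9, 1e10, 1e11):
    print(f"N={n:.0e}  L(N)={loss_from_params(n):.2f}")
```

This is exactly why the data bottleneck matters: the parameter curve alone overstates what you get if the data term stops improving.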
o1 proved that synthetic data and inference-time compute are a new ramp. There will be more challenges and more innovations. There is a lot of room left in hardware, software, model training, and model architecture.
> It's not realistic to make firm quantified predictions any more specific than what I have given.
Then do you actually know what you're talking about or are you handwaving? I'm not trying to be offensive but business plans can't be made based on a lack of predictions.
> We will likely see between 3 and 10000 times improvement in efficiency or IQ or speed of LLM reasoning in the next 5 years
That variance is too large to take you seriously, unfortunately. That's unfortunate because I was really hoping you had an actionable insight for this discussion. :(
If I, for instance, tell my wife I can improve our income by 3x or 1000x but I don't really know, there's no planning that can be done and I'll probably have to sleep on the couch until I figure out what the hell I'm doing.
> business plans can't be made based on a lack of predictions.
They can. It's called "taking a risk". Which is what startups are about, right?
It's hard to give a specific prediction here (I'm leaning towards 10x-1000x in the next 5 years), but there's also no good reason to believe progress will stop, because a) there's many low and mid-hanging fruits to pick, as outlined by GP, and b) because it never did so far, so why would it stop now specifically?
Why did we stop going to the moon and flying commercial supersonic?
Some things that are technologically possible are not economically viable. AI is a marvel but I'm not convinced it will actually plug into economic gains that justify the enormous investment in compute.
Spoken like a young man. I salute you. However, on your journey remember that risk of ruin is what you want to minimize relative to your estimated rewards. That is, not all risks can be afforded. I happen to have a limited budget, perhaps you don't and costs in terms of money and time don't matter for you.
Ruin can set you back years, decades, or permanently, and then you find yourself on a Y Combinator thread hopelessly trying to find someone who can meaningfully quantify and forecast medium-term AI progress so that you can hire them to help your ongoing project. Alas, all you get is the comments section. :-)
> but there's also no good reason to believe progress will stop, because a) there's many low and mid-hanging fruits to pick, as outlined by GP, and b) because it never did so far, so why would it stop now specifically?
Specifically, due to lack of data. Please refer to the earlier comment[^0]: deep learning requires vast amounts of data. Current models have already been trained on the entire internet and corpus of published human knowledge. Models are now being trained on synthetic data and we're running out of that too. This data bottleneck has been widely reported and documented.
> If I, for instance, tell my wife I can improve our income by 3x or 1000x but I don't really know, there's no planning that can be done and I'll probably have to sleep on the couch until I figure out what the hell I'm doing.
For most people, even a mere 3x over the next 5 years is huge: that's roughly 25% growth per year, compounded.
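For what it's worth, the per-year figure checks out. The annualized rate implied by an overall multiple over n years is just the n-th root minus one:

```python
# Annualized growth rate implied by an overall multiple over `years` years:
# rate = multiple**(1/years) - 1

def annualized(multiple: float, years: int) -> float:
    return multiple ** (1 / years) - 1

print(f"3x over 5y    -> {annualized(3, 5):.1%}/year")
print(f"10x over 5y   -> {annualized(10, 5):.1%}/year")
print(f"1000x over 5y -> {annualized(1000, 5):.1%}/year")
```

This also shows how wild the earlier [3…1000]x range is once annualized: the low end is a good salary bump, the high end is roughly quadrupling every year.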
3x in 5 years is a reasonable low-ball for hardware improvements alone. Caveat: top-end silicon is now being treated as a strategic asset, so there may be wars over it, driving up prices and/or limiting progress, even on the 5-year horizon.
I'm unclear why your metaphor would have you sleeping on the sofa: if tonight you produce a business idea that you can be 2σ-confident will give you an income in the range [3…1000]x five years from now, you can likely get a loan tomorrow for a substantially bigger house than you could yesterday; in the UK that's a change slightly larger than going from the median full-time salary to the standard member of parliament salary.
(This reasoning, based on the observed lowering of compute costs, was used even decades ago to justify delaying investment in compute until it became cheaper.)
The arguments I've seen elsewhere for order-of-10,000x* cost improvements (a proxy for efficiency and speed, if not IQ) are based on various observed cost reductions** since ChatGPT came out. Personally, I doubt the high end of that will come to pass; my guess is those all represent low-hanging fruit that can't be picked twice, but even then I would still expect some opportunity for further gains.
* The original statement had one more digit in it than yours, but this doesn't make much difference to the argument either way
Also office jobs will be adapted to be a better fit to what AI can do, just as manufacturing jobs were adapted so that at least some tasks could be completed by robots.
Not my downvote, just the opposite but I think you can do a lot in an office already if you start early enough . . .
At one time I would have said you should be able to have an efficient office operation using regular typewriters, copiers, filing cabinets, fax machines, etc.
And then you get Office 97, zip through everything and never worry about office work again.
I was pretty extreme having a paperless office when my only product is paperwork, but I got there. And I started my office with typewriters, nice ones too.
Before long Google gets going. Wow. No-ads information superhighway, if this holds it can only get better. And that's without broadband.
But that's beside the point.
Now it might make sense for you to at least be able to run an efficient office on the equivalent of Office 97 to begin with. Then throw in the AI or let it take over and see what you get in terms of output, and in comparison. Microsoft is probably already doing this in an advanced way. I think a factor that can vary over orders of magnitude is how does the machine leverage the abilities and/or tasks of the nominal human "attendant"?
One type of situation would be where a less-capable AI could augment a given worker more effectively than even a fully automated alternative using a 10x more capable AI. There's always some attendant somewhere, so you never get a zero in this equation no matter how close you come.
Could be financial effectiveness or something else, the dividing line could be a moving target for a while.
You could even go full paleo and train the AI on the typewriters and stuff just to see what happens ;)
But would you really be able to get the most out of it without the momentum of many decades of continuous improvement before capturing it at the peak of its abilities?