Yes, it will replace human thinking. That's quite literally the explicit goal of every AI company.
Historically, every technological revolution serves to replace some facet of human labor (usually with the incentive of squeezing profits as technology gets cheaper over time but wages do not).
Industrial revolution == automate non-dexterous manual labor
Information age == automate "computational"/numerical thinking
AI == automate thinking
Robotics + AI == automate dexterous manual labor
Clickbait title and article. There was a large reorg of genai/msl and several other teams, so things have been shuffled around and they likely don't want to hire into the org while this is finalizing.
A freeze like this is common and basically just signals that they are ready to get to work with the current team they have. The whole point of the AI org is to be a smaller, more focused, and lean org, and they have been making several strategic hires for months at this point. All this says is that zuck thinks the org is in a good spot to start executing.
From talking with people at and outside of the company, I don't have much reason to believe that this is some kneejerk reaction to some supposed realization that "it's all a bubble." I think people are conflating this with whatever Sam Altman said about a bubble.
Keep in mind - this is not reaffirming HN's anti-AGI/extremely long timeline beliefs.
The article explicitly states that he thinks we will have an AI system that "Will be able to do your taxes" by 2028, and a system that could basically replace all white collar work by 2032.
I think an autonomous system that can reliably do your taxes with minimal to no input is already very, very good, and 2032 as the benchmark for being able to replace 90% to all of white collar work is pretty much AGI, in my opinion.
Fwiw I think the fundamental problems he describes in the article that are AGI blockers are likely to be solved sooner than we think. Labs are not stupid enough to put all their eggs and talent into the scaling basket; they are most definitely allocating resources to tackling problems like the ones described in the article, while putting the remaining resources into bottom-line production (scaling current model capabilities without expensive R&D and reducing serving/training cost).
When was OTS written again? That was effectively an expert system that could do your taxes and it was around at least ten years ago. It didn't even need transformers.
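For context on what "expert system" means here: a rule-based tax calculator is just deterministic logic over brackets and deductions, no ML involved. A minimal illustrative sketch, with made-up bracket numbers that are not real tax law and not OTS's actual code:

    # Toy rule-based "expert system" for taxes: hand-written rules, no ML.
    # Bracket thresholds and the standard deduction are invented for
    # illustration only; they are not real tax figures.
    BRACKETS = [            # (upper bound of bracket, marginal rate)
        (10_000, 0.10),
        (40_000, 0.20),
        (float("inf"), 0.30),
    ]
    STANDARD_DEDUCTION = 12_000

    def tax_owed(gross_income: float) -> float:
        taxable = max(0.0, gross_income - STANDARD_DEDUCTION)
        owed, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            if taxable > lower:
                owed += (min(taxable, upper) - lower) * rate
            lower = upper
        return round(owed, 2)

    print(tax_owed(55_000))  # 7900.0, computed from fixed rules alone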
No one has a good benchmark for what AGI is. Already LLMs are more capable at most tasks than most random people off the street. I think at this point people keep asking about it because they're trying to ask some deeper philosophical question like "when will it be human" but don't want to say that because it sounds silly.
> Already LLMs are more capable at most tasks than most random people off the street.
I cannot imagine having the narrow conceptualization of the universe of human tasks necessary to even be able to say this with a straight face, irrespective of one's qualitative assessment of how well LLMs do the things that they are capable of doing.
The first line is just some cope people use to tell themselves they are different.
Someone using AI won't "take" your job; they'll just get more done than you, and when the company inevitably fires more people because AI can do more and more work autonomously, the first people to go will be the people not producing as much (i.e., the people not using AI).
In the limit both groups are getting their jobs taken by AI. Knowing how to use AI is not some special skill.
Imo this is a misunderstanding of what AI companies want AI tools to be and where the industry is heading in the near future. The endgame for many companies is SWE automation, not augmentation.
To expand -
1. Models "reason" and can increasingly generate code given natural language. It's not just fancy autocomplete; it's like having an intern-to-mid-level engineer at your beck and call to implement some feature. Natural language is generally sufficient when I interact with other engineers, so why is it not sufficient for an AI, which (in the limit) approaches an actual human engineer?
2. Business-wise, companies will not settle for augmentation. Software companies pay tons of money in headcount; it's probably most mid-sized companies' top or second line item. The endgame for leadership at these companies is to do more with less. This necessitates automation (in addition to augmenting the remaining roles).
People need to stop thinking of LLMs as "autocomplete on steroids" and actually start thinking of them as a "24/7 junior SWE who doesn't need to eat or sleep and can do small tasks at 90% accuracy with some reasonable spec." Yeah, you'll need to edit their code once in a while, but they also keep getting better and cost less than an actual person.
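To make the "90% accuracy on small tasks" framing concrete, here is a back-of-the-envelope sketch. The 0.9 per-step figure is just the rough number from the sentence above, not a measured benchmark, and the independence assumption is a simplification:

    # Back-of-the-envelope: if each small step succeeds independently with
    # probability p, a task that needs n such steps succeeds with p**n.
    # p = 0.9 is the rough figure from the comment, not a real benchmark.
    p = 0.9
    for n in (1, 5, 10, 20):
        print(f"{n:>2} steps: {p**n:.1%} chance of getting everything right")
    # ~90% for 1 step, ~59% for 5, ~35% for 10, ~12% for 20 --
    # which is why "small tasks with a reasonable spec" is the sweet spot.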
This sounds exactly like the late '90s all over again. All the programming jobs were going to be outsourced to other countries and you'd be lucky to make minimum wage.
And then the last 25 years happened.
Now people are predicting the same thing will happen, but with AI.
The problem then, as now, is not that coding is hard; it's that people don't know what the hell they actually want.
Software companies make a single copy and sell it a billion times. The revenue per employee is insane. The largest companies are trying to make the best product in the world and seek out slight advantages.
The cost saving mindset you are describing is found in companies where software isn’t a core part of the business.
That is the ideal for the management types and the AI labs themselves, yes. Copilots are just a way to test the market and gain data and adoption early. I don't think it is much of a secret anymore. We even see benchmarks created (e.g. OpenAI recently did one) that are solely about taking paid work away from programmers and how many "paid tasks" a model can actually do. They have a benchmark - that's their target.
As a standard pleb, though, I can understand why this world, where the people with connections, social standing, and capital have an advantage, isn't appealing to many on this forum. If anyone can do something, other advantages that aren't as easy to acquire matter relatively more.
I mean duhh? Is there anyone who denies this is what they would want to happen? That's capitalism. They'd also kill all other roles if they could - and there are other very expensive personnel like salespeople, marketers, accountants, etc.
Whether it's going to happen, when, and by how much is a different matter from what they want, though.
Pessimistically, you are right: there will be no new jobs. The entire goal of these companies is to monopolize near-zero marginal cost labor. Another way to read this is that humans are no longer necessary for economic progress.
All that I hope for in this case is that governments actually take this seriously and labs/governments/people work together to create better societal systems to handle that. Because as it stands, under capitalism I don't think anyone is going to willingly give up the wealth they made from AI to spread to the populace as UBI. Some redistribution like that is necessary in a capitalist system (if we want to maintain one), since it's built on consumption and spending.
Though if it's truly an "abundance" scenario, then I'd imagine it probably wouldn't matter that people don't have jobs, since I'd assume everything would be dirt cheap and quality of life would be very high. Though personally I am very cynical when it comes to "AGI is magic pixie dust that can solve any problem" takes, and I'd assume in the short term companies will lay off people in swathes since "AI can do your job," but AI will be nowhere close to increasing those laid-off people's quality of life. It'll be a tough few years if we don't actually get transformative AI.
And as it stands, AI is nowhere close to (1) and (2), but is pretty close to making all of (3) redundant.
This could be because most work is actually frivolous (very possible), but it's also easy for them to sell those since ostensibly (1) and (2) actually require a lot of out-of-distribution reasoning, thinking, and real agentic research (which current models probably aren't capable of).
(3) just makes the most money now with the current technology. Curing cancer with LLMs, though altruistic, is more unrealistic and has no clear path to immediate profitability because of that.
These "AGI" companies aren't doing this out of the goodness of their hearts with humanity in mind, its pretty clearly meant to be a "final company standing" type race where everyone at the {winning AI Company} is super rich and powerful in whatever new world paradigm shows up afterwards.
You are thinking about "hard" and "easy" in the wrong frame of mind. What Tesla does is not "easy" either. Their moat is manufacturing, the R&D they've spent co-designing their HW and SW stack, and their insane supply chain.
Ford does not suddenly have several million cars with 8-9 cameras to tap into for training data, nor does it have the infrastructure/talent to train models with the data it may get. I think you are underselling the Tesla moat.
It's the same reason there are only 3-4 "frontier" AI labs and the rest are just playing catchup, despite a lot of LLM improvements being pretty well researched and open in papers.