Answer: Any job where the majority (or all) of your work can be done strictly on a computer, and where tasks have easily verifiable and objective outcomes. And from an economic perspective, the jobs with the highest labor cost (i.e., the highest margins for AI companies that replace them) have the strongest economic incentive to be automated first. So Software, Finance, Accounting, Law, etc.
Yes - this means software engineers are likely the first to go, along with other high-paying computer jobs.
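A back-of-the-envelope way to see the ordering claim. Every number here is a made-up assumption for illustration, not data:

    # Toy sketch of the "highest-cost jobs get automated first" claim.
    # All figures are invented assumptions.

    annual_labor_cost = {
        "software": 180_000,
        "law": 150_000,
        "accounting": 90_000,
        "retail": 35_000,
    }
    ai_cost = 20_000  # assumed flat cost for an AI to fill any role

    # Sort roles by the margin an AI company could capture by replacing them.
    by_margin = sorted(annual_labor_cost,
                       key=lambda job: annual_labor_cost[job] - ai_cost,
                       reverse=True)
    print(by_margin)  # ['software', 'law', 'accounting', 'retail']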
One thing that irks me about this place is the great confidence with which people make claims when they have zero idea about stuff outside of their domain.
I know ten people who work across Accounting and Finance in high-level positions who have all told me that in the past few months, the LLM steam has worn off and they aren't seeing any material benefits.
yeah and 2s has not been doing too hot for a few years now. Jane Street I buy - they tend to recruit a lot of CMU students. But definitely fewer than 15 of the new grads they hire each year are from CMU. They maybe hire on the order of 50-100 new grad SWEs a year.
It will probably be a lot worse, since white-collar workers (especially the ones that AI is targeting, like banking, software, etc., since those are super-high-margin jobs to automate) traditionally make and spend more than the average worker.
These are the people getting mortgages and sending kids to private school and whatnot. If their spending power suddenly drops to 0, it's probably going to be pretty bad. I wonder what the housing market would look like in these cases.
I agree. I think most companies would be better off being 100% AI driven, since the synchronization cost between agents (or whatever the fad will be) is likely much lower than human social synchronization, with richer information transfer between "workers" (so less ambiguity, fewer tradeoffs to be made, etc.).
As soon as a person enters the loop you add a manual sync point that probably doesn't need to be there. I think this is why you are increasingly seeing companies tell their people to be "on the loop" or "out of the loop" with their AI. The less syncing with a person, the better. And I think once this experiment runs its course, we will probably find out that human social interaction matters much less than we thought it did, especially for super transactional things like a corporate job where most of your work is done on a computer.
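As a toy illustration of why a single human sync point dominates everything else in a pipeline. All timings are pure assumptions:

    # Toy model of a linear pipeline of agent steps, where some steps
    # block on a human approval ("in the loop"). All timings are made up.

    AGENT_STEP_SECONDS = 2            # assumed per-step agent latency
    HUMAN_REVIEW_SECONDS = 4 * 3600   # assumed half-day human turnaround

    def pipeline_latency(agent_steps: int, human_sync_points: int) -> int:
        return (agent_steps * AGENT_STEP_SECONDS
                + human_sync_points * HUMAN_REVIEW_SECONDS)

    print(pipeline_latency(100, 0))  # 200 -- fully automated
    print(pipeline_latency(100, 1))  # 14600 -- one human dominates the total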
> ... Just that it doesn’t replace the social, human, and relationship based aspects of work, whether this is trust, or just being interested in what someone else says.
Yeah I also don't buy this. Most white-collar work _seemingly_ necessitates trust, social/human aspects, etc. because we _have_ to interact with other humans, and the way we interact with each other is lossy and often has misaligned or not explicitly stated motivations.
In other words, most white-collar work _seems_ bottlenecked on people-centric things because we have imperfect information about what other people want, so we have to use soft skills (i.e., skills only real humans have) to figure out the motivations of various stakeholders, align expectations, garner favor, etc. amongst all of them. In a world where most of the workforce is AI, I think this problem of tacit information gets largely solved, since AIs can, in theory, convey their intent and losslessly send information to one another without the need to waste time "aligning."
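To make the "lossless" point concrete, here's a minimal sketch. The schema is invented purely for illustration: instead of inferring motivations from conversation, one agent hands another an explicit, machine-checkable intent.

    # Illustrative only: two agents exchanging a typed, explicit "intent"
    # instead of ambiguous prose. The Intent schema is a made-up example.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Intent:
        goal: str                      # what the requester actually wants
        constraints: tuple[str, ...]   # hard requirements, stated explicitly
        deadline_days: int

    request = Intent(
        goal="migrate the billing service to the new API",
        constraints=("zero downtime", "no schema changes"),
        deadline_days=14,
    )

    # The receiving agent never has to read between the lines: every
    # requirement is an explicit field it can check directly.
    assert "zero downtime" in request.constraints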
The other thing that people argue, especially in software, is that architecture and tradeoff decisions will remain in the human realm, because apparently only people have the "taste" to pick and choose the right solutions. I also think that:
(1) this will be easily solved by AI/current LLMs, since logically there shouldn't be a big difference between writing good code and designing good systems architecture, and LLMs are ostensibly already good at coding
(2) "taste" and "tradeoffs" are things that, if you had more information (once again, if you could convey most or all necessary information losslessly between everyone in your org), things that appeared to be "tradeoffs" before might just be binary solutions.
Also, just practically speaking, the stated goal of AI companies is to automate all labor. They won't just sit back happily collecting checks while parts of the human economy remain unautomated; that's revenue they could capture. Whatever people claim AI lacks today will just be added to it in 6 months; AI companies are strongly incentivized to work towards this.
And at the end of the day, work is a transaction between employees and employers. A company's primary purpose is to generate money for shareholders, and human labor is just how it gets done. It doesn't matter if I _want_ to talk to a nice coworker instead of Claude 4.6 Opus. If Claude costs less than my nice coworker and has the same or better output, the company will happily replace that coworker with Claude, because it's strictly beneficial for the company.
1. AI being able to code well seems like it would also get pretty close to doing basically everything else you described. If coding is a game of reasoning, then solving coding effectively solves reasoning, and you can likely map that to most other problems provided you have a sufficiently good harness and tool-calling setup (a rough sketch of what I mean by a harness is below this comment).
2. Let's assume AI won't replace everyone as point (1) implies - and it just replaces _most_ people. Under this assumption, we will likely see large swathes of layoffs. Many SaaS companies have a pay-per-seat model. Fewer people employed at companies = fewer seats being paid for = less SaaS revenue.
So not only is there a threat of companies just vibe coding various SaaS-es in-house, but there is also a threat that the TAM of many SaaS products (which is typically proportional to the # of employees out there) will actually _shrink_ (see the second sketch below).
I think the main class of SaaS companies that will remain in the medium term are the ones in legally touchy or compliance-heavy industries - think healthcare, finance, and security (Workday, for example). But even Workday will be affected by point (2) above. Overall, I think the mid-to-long-term outlook for SaaS, especially "SaaS", is not great.
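On the "harness and tool-calling" bit in point 1, here's roughly the shape of the loop I mean. This is a minimal sketch with stubbed-out pieces: `model_call` and the `TOOLS` registry are hypothetical stand-ins, not any specific vendor's API.

    # Minimal sketch of a tool-calling harness. `model_call` and TOOLS
    # are hypothetical stubs, not a real vendor API.

    def model_call(prompt: str) -> dict:
        # A real harness would call an LLM here; the model either asks
        # for a tool or returns a final answer.
        return {"answer": "stub"}

    TOOLS = {
        "run_tests": lambda args: "all tests passed",  # hypothetical tool
    }

    def agent_loop(task: str, max_steps: int = 10) -> str:
        prompt = task
        for _ in range(max_steps):
            out = model_call(prompt)
            if "answer" in out:
                return out["answer"]
            # Execute the requested tool and feed the result back so the
            # model can keep reasoning against verifiable, objective feedback.
            result = TOOLS[out["tool"]](out.get("args", {}))
            prompt += f"\n[tool:{out['tool']}] {result}"
        return "gave up"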
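And on the per-seat point from (2), the arithmetic is simple. All numbers here are hypothetical:

    # Back-of-the-envelope for the per-seat argument. Made-up numbers.

    price_per_seat = 30 * 12       # $/seat/year
    seats_today = 10_000
    layoff_rate = 0.40             # assumed 40% headcount reduction

    revenue_today = price_per_seat * seats_today
    revenue_after = price_per_seat * seats_today * (1 - layoff_rate)

    print(revenue_today)   # 3600000
    print(revenue_after)   # 2160000.0 -- revenue shrinks with headcount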
Yes, it will replace human thinking. That's quite literally the explicit goal of every AI company.
Historically, every technological revolution has served to replace some facet of human labor (usually with the incentive of squeezing out more profit as technology gets cheaper over time while wages do not).
Industrial revolution == automate non-dexterous manual labor
Information age == automate "computational"/numerical thinking
Clickbait title and article. There was a large reorg of genai/msl and several other teams, so things have been shuffled around, and they likely don't want to hire into the org while this is being finalized.
A freeze like this is common and basically just signals that they are ready to get to work with the team they currently have. The whole point of the AI org is to be a smaller, more focused, leaner org, and they have been making strategic hires for months at this point. All this says is that Zuck thinks the org is in a good spot to start executing.
From talking with people at and outside of the company, I don't have much reason to believe that this is some kneejerk reaction to a supposed realization that "it's all a bubble." I think people are conflating this with whatever Sam Altman said about a bubble.