> there is nothing about the training process of these models that would encourage them to make the output of any layer apart from (n-1) meaningful as the input of layer n
There is something that does exactly that - the residual connections. Each layer adds a delta to the residual stream, which means all layers share a common space. There are papers showing correlation across layers; it's not uniform across depth, of course, but consecutive layers tend to be correlated.
I built a similar system myself, then I ran evals on it and found that the planning ceremony is mostly useless. Claude can deal with simple prose, item lists, checkbox todos - anything works. The agent won't be a better coder because of how you deliver your intent.
But what makes a difference is running a plan-review agent and a work-review agent; they fix issues before and after the work. Both pull their weight, but the plan-review one is the most surprising. The work-review judge reliably finds bugs to fix, which is less surprising as insights go. They should run as separate subagents, not in the main one, because they need a fresh perspective.
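A minimal sketch of the shape I mean, with a stub in place of the actual model call - everything here is hypothetical, not any real agent framework API. The point is that each reviewer is handed only its own slice of context:

```python
def ask_model(prompt, context):
    """Stub: replace with a real LLM call. Each invocation receives only
    the context it is handed, which is what gives reviewers a fresh view."""
    return f"reviewed: {len(context)} chars of context"

def plan_review(plan):
    # The plan reviewer sees only the plan, not the work session.
    return ask_model("Find gaps in this plan.", plan)

def work_review(diff):
    # The work reviewer sees only the resulting code, not the transcript.
    return ask_model("Find bugs in this change.", diff)
```

Running them as subagents (rather than in the main loop) is just a matter of never passing the main session's context into these calls.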
Other things that matter are 1. testing enforcement, 2. cross-task project memory. My implementation of memory combines capturing user messages with a hook, an append-only log, and a compressed memory state of the project, which gets read before work and updated after each task.
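Roughly, the shape of it. The paths and the "compression" here are stand-ins - a real system would summarize the log with the model rather than keep the last few messages:

```python
import json
import os
import time

LOG_PATH = "project_memory.log"    # hypothetical: append-only event log
STATE_PATH = "memory_state.json"   # hypothetical: compressed state, read before work

def capture_message(text, log_path=LOG_PATH):
    """Hook: append a user message to the log; history is never rewritten."""
    entry = {"ts": time.time(), "msg": text}
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def load_state(state_path=STATE_PATH):
    """Read the compressed memory state, or start empty."""
    if os.path.exists(state_path):
        with open(state_path) as f:
            return json.load(f)
    return {"summary": "", "seen": 0}

def update_state(log_path=LOG_PATH, state_path=STATE_PATH):
    """After a task: fold unseen log entries into the compressed state.
    Stand-in compression: keep the tail of recent messages as the summary."""
    state = load_state(state_path)
    with open(log_path) as f:
        entries = [json.loads(line) for line in f]
    new = entries[state["seen"]:]
    if new:
        tail = [e["msg"] for e in new][-5:]
        state["summary"] = (state["summary"] + " | " + " | ".join(tail)).strip(" |")
    state["seen"] = len(entries)
    with open(state_path, "w") as f:
        json.dump(state, f)
    return state
```

The append-only log is the ground truth; the state file is a cache you can always rebuild from it.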
Workflow matters too, how you organize your docs, work tasks, reviews. If you do it all by hand you spend a lot of time manually enforcing a process that can be automated.
I think task files with checkable gates are a very interesting animal - they carry intent, plan, work and reviews, and at the end of the work they can become docs. They can be executed, but also passed as values, and they reflect on themselves - so they sport homoiconicity and reflection.
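For example, something like this - the file format and gate syntax are made up for illustration, but the idea is that the same artifact is both the document and the thing the workflow checks:

```python
import re

# Hypothetical task file: intent, plan, work and reviews in one artifact,
# with unchecked gates blocking progress.
TASK_FILE = """\
# Task: add retry logic
## Intent
Make flaky network calls survive transient failures.
## Plan
- [x] gate: plan reviewed
## Work
- [x] gate: tests pass
## Review
- [ ] gate: work reviewed
"""

def open_gates(text):
    """Return the gates that are still unchecked."""
    return re.findall(r"- \[ \] gate: (.+)", text)

def is_done(text):
    """A task is done only when every gate is checked."""
    return not open_gates(text)
```

Because the gate check reads the same text a human (or agent) edits, the file reflects on itself: checking a box is both documentation and state transition.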
You can differentiate by context, one sees the work session, the other sees just the code. Same model, but different perspectives. Or by model, there are at least 7 decent models between the top 3 providers.
I know, but none of those is nearly as much of a difference as another human looking at code. The top models have such overlapping training data they sometimes identify as each other.
If AI produces surplus where does it go? Not talking about investment backed datacenter buildout and AI labs. Talking about the results of AI work...
I think AI outcomes distribute to contexts where it is used, and produce a change in how we work, what work we take on. Competition takes care of taking those surpluses and investing them in new structure, which becomes load bearing and we can't do without it anymore.
In the end it looks like we are treading water, just like it was when computers got 1M times faster in a couple of decades, but we felt very little improvement in earnings or reduction in work.
Surplus becomes structure and the changed structure is something you can't function without. Like the cell and mitochondrion, after they merged they can't be apart, can't pay their costs individually anymore. Surplus is absorbed into the baseline cost.
If existing capital starts to generate excessive profits, more capital will be built, which will require human labor and will make the original capital less valuable.
In theory. In practice, the excessive capital of the incumbent allows them to price out or buy the budding competition, or the legislators, so as to protect their position.
The natural state of a capitalist system is the monopoly.
If AI being a million billion zillion times more productive at doing bullshit jobs nets in very little economic gain, then that lays bare the net economic value of all our bullshit jobs.
But given that the stock market hasn't panicked, this must mean at least one of these premises is false:
1. Economic activity is relatively flat.
2. AI makes us a million billion zillion times more productive than we used to be.
> lays bare the net economic value of all our bullshit jobs.
This was already obvious, the more important question is what are we (collectively, society & our governments) going to do about it?
We (should have) already known most of our jobs were bullshit jobs, especially white collar jobs. The difference is now we might have something coming that will eliminate the bullshit jobs.
But society will always need bullshit jobs or the whole system collapses. Not everyone can go dig ditches, so what do we do?
> In the end it looks like we are treading water, just like it was when computers got 1M times faster in a couple of decades, but we felt very little improvement in earnings or reduction in work.
I think this is a very important point. The hedonic treadmill means real gains are discounted. The novelty news cycle is like an Osborne effect for improvements - like the semi-annual Popular Mechanics flying-car covers, where an enticing future is perpetually nearly here and at the same time disappointingly never materializes.
I think it's gonna mirror how the white collar classes, coastal elites, professional managerial class, whatever you want to call them, sold the country's industrial base to the Far East. They got a little bit of money out of it, but the biggest gain was the material wealth: $1 widgets instead of $2 widgets. All the people who weren't hurt by it got to live with more material plenty. Of course the nominal prices of things didn't go down, but that's just inflation, which is a somewhat separate effect.
This time the jobs most in the crosshairs of AI are the ones that constitute the paper-pushing overhead of modern society. Instead of $1 widgets from China replacing $2 domestic widgets, it's gonna be $1 AI services replacing $2 services that require a real human.
This is hard to reason about because people tend to consume these kinds of services in big multi-hundred or multi-thousand dollar increments, but in practice it means that when you have to engage an accountant or engineer, or have something planned out in accordance with some standard, it will be substantially cheaper because of the reduced professional-labor component.
And of course, as usual, the string-pulling and investor class will get fabulously wealthy along the way.
Does the work you do provide more or less value to the company than your salary? Where does the difference go? If your killer feature closes a $5M deal, who gets that money?
We live as capitalist serfs. Someone else gets all the value you create, and you should be grateful for the peanuts they toss back to you.
> If AI produces surplus where does it go? Not talking about investment backed datacenter buildout and AI labs. Talking about the results of AI work...
Into the 1%'s pockets - that's where the vast majority of the extra productivity computers/internet/automation brought has gone for the last 50 years: https://www.epi.org/productivity-pay-gap/
The study doesn't say it went into the 1%'s pockets. It says it went to 2 places:
1) The salaries of corporate employees
2) Shareholders and capital owners
Regarding number 2: "Shareholders" would include anyone who owns any stock at all, including a lot of middle class people with a simple S&P 500 ETF in their portfolio.
And the increase in productivity allowed more people to become capital owners, AKA entrepreneurs. The explosion in software entrepreneurs, for example.
#2 only works if the public is allowed to invest when the new technology is in its early stages, which is currently not the case. Microsoft went public in 1986 at a valuation of $2.3 billion (in today's dollars). What's OpenAI / Anthropic going to be worth by the time they IPO? $1 trillion? $2 trillion?
> Regarding number 2: "Shareholders" would include anyone who owns any stock at all, including a lot of middle class people with a simple S&P 500 ETF in their portfolio.
Yes, but shares are not at all uniformly distributed. Tim Cook owns 3.28 million shares of AAPL. For comparison, the 50 million Vanguard customers have to divide 1.3 billion shares amongst them, averaging about 26 shares of AAPL each.
> And the increase in productivity allowed more people to become capital owners, AKA entrepreneurs. The explosion in software entrepreneurs, for example.
The majority of those end up getting bought by larger software companies.
Overall capital ownership is increasingly concentrated among a small number of elites.
I think getting into the weeds on whether $80k or $100k or $120k/yr counts as middle class sort of misses the point, but to my eyes it is hard to argue you're middle class if you're making more than about $150k at the most.
Even the GP, which I directionally agree with, says "upper-middle class is people making ~$200k/yr" but you're deep into the top quintile by that point, probably top 10%. I don't know what percentile I consider "upper middle" but it's definitely lower than top 10%.
A good indicator that someone is simply being dogmatic and not arguing in good faith (i.e. actually trying to understand someone's POV, and being open to being proven wrong in their assumptions) is when they reply within 5-20 minutes until a particularly good point is made, and then they disappear into the ether.
I didn't say A means B 100% of the time, just that it's an indicator. The same way having a car with a lot of dings and scratches and holes doesn't mean you're a bad driver, you could have just purchased it that way or they could have happened through no fault of your own. But it's still an indicator - one piece of evidence to be looked at holistically.
But I notice you haven't actually responded to the point that "the 1%" actually does mean "your neighbor with a nice house and a pool" because $800k income puts you in the 1%. That's two doctors in not-particularly-highly-paid specialties. That's two good attorneys at big firms. That's two FAANG software engineers. Definitionally there's not going to be a ton of people in that group, it's hard to get into that group, but they're everywhere not just billionaires twirling their mustache and trying to reroute rivers to sell the water like some caricature.
The problem with social media is precisely the platform, it ranks what keeps people addicted, seeing more ads. Creators conform to the Algorithm and produce slop to capture some of that scarce attention. Nobody cares about users. Same shit happens on Google Search, YouTube, Amazon Search, Google Store, App Store... all platforms produce shitty feeds and search results. And before them we had TV and newspapers as slop making platforms.
Ahh yes, the “It’s always been this way” argument. I was wondering if it was going to show its ugly head.
The difference now is ANYONE can become a TV station. A newspaper. A radio talk show. While I’m all for allowing anyone to do anything, I’m also a fan of curation and quality over quantity. Social media has no value. Because it values nothing.