Also, not just followers. There’s a kinda “merchant” behaviour too I think … signalling and trading in hype perspectives.
But to be fair, I’m not sure what the average dev/eng is supposed to do against a climate of regular change, many disparate opinionated groups with disparate tech stacks, and, IMO, an engineering culture that’s rarely pure about actually weighing the value of tech/methods against relevant constraints and trade-offs.
Indeed, likely a useful lens on the current moment I’d say.
For better/worse, and whether completely so or not, the time of the professional keyboard-driven mechanical logic problem solver may simply have come and gone in ~4 generations (70 years?).
By 2050 it may be more or less as niche as it was in 1950??
Personally, I find the relative lack of awareness and attention on the human aspect of it all a bit disappointing. Being caught in the tides of history is a thing, and can be a tough experience, worthy of discourse. And causing and even forcing these tides isn’t necessarily a desirable thing, maybe?
Beyond that, mapping out the different spaces that are brought to light with such movements (eg, the various sets of values that may drive one and the various ways that may be applied to different realities) would also certainly be valuable.
For me, I’m vaguely but persistently thinking about a career change, wondering if I can find something of more tangible “real world” value. An essential part of that is the question of whether any given tech job holds much apparent “real world” value at all.
Agreed. And I feel it fair to argue that this is the intended interface between proprietary software and its users, categorically.
And more so with AI software/tools, and IMO frighteningly so.
I don’t know where the open models people are up to, but as a response to this I’d wager they’ll end up playing the Linux desktop game all over again.
All of which strikes at one of the essential AI questions for me: do you want humans to understand the world we live in or not?
Doesn’t have to be individually, as groups of people can be good at understanding something beyond any individual. But a productivity gain isn’t on its own a sufficient response to this question.
Interestingly, it really wasn’t long ago that “understanding the full computing stack” was a topic around here (IIRC).
It’d be interesting to see if some “based” “vinyl player programming” movement evolved in response to AI in which using and developing tech stacks designed to be comprehensively comprehensible is the core motivation. I’d be down.
> we aim for a computing system that is fully visible and understandable top-to-bottom — as simple, transparent, trustable, and non-magical as possible. When it works, you learn how it works. When it doesn’t work, you can see why. Because everyone is familiar with the internals, they can be changed and adapted for immediate needs, on the fly, in group discussion.
Funny for me, as this is basically my principal problem with AI as a tool.
It’s likely very aesthetic or experiential, but for me, it’s strong: a fundamental value of wanting to work to make the system and the work transparent, shared/sharable and collaborative.
Always liked B Victor a great deal, so it wasn’t surprising, but it was satisfying to see alignment on this.
The only silver lining I can see is that a new perspective may be forced on how well or badly we’ve facilitated learning, usability, generally navigating pain points and maybe even all the dusty presumptions around the education / vocational / professional-development pipeline.
Before, demand for employment/salary pushed people through. Now, if actual and reliable understanding, expertise and quality is desirable, maybe paying attention to how well the broader system cultivates and can harness these attributes can be of value.
Intuitively though, my feeling is that we’re in some cultural turbulence, likely of a truly historical magnitude, in which nothing can be taken for granted and some “battles” were likely lost long ago when we started down this modern-computing path.
To be fair, LLMs are just the most recent step in a long road of doing the same thing.
At any point of progress in history you can look backwards and forwards and the world is different.
Before tractors a man with an ox could plough x field in y time. After tractors he can plough much larger areas. The nature of farming changes. (Fewer people needed to farm more land.)
The car arrives, horses leave. Computers arrive, the typing pool goes away. Typing was a skill, now everyone does it and spell checkers hide imperfections.
So yeah LLMs make "drawing easier". Which means just that. Is that good or bad? Well I can't draw the old fashioned way so for me, good.
Cooking used to be hard. Today cooking is easy, and very accessible. More importantly good food (cooked at home or elsewhere) is accessible to a much higher % of the population. Preparing the evening meal no longer starts with "pluck 2 chickens" and grinding a kilo of dried corn.
So yeah, LLMs are here. And yes things will change. Some old jobs will become obsolete. Some new ones will appear. This is normal, it's been happening forever.
The difference between GenAI and your examples is a theft component.
They stole our data - your data - and used it to build a machine that diverts wealth to the rich. The only equitable way for GenAI to move forward is if we all own a share of it, since it would not exist in its current form without our data. GenAI should be a Universal Basic Asset.
You can, perhaps. I was laid off 2 years ago and am trying to pay rent. All according to plan, I suppose.
Also, no. I would not invest in this hype bubble. We're definitely getting an AI crash within the next 5-7 years, a la the dotcom crash. I prefer safer stocks if I have the choice.
You realize that not everybody has the means to invest in stocks, right? Artists are so commonly poor, that there's even a trope called the "starving artist." I have noticed a distinct lack of empathy in the broader discussion about GenAI's impact on the working class. The argument is always made that this isn't new, that people have to retrain when new technology displaces them. Ok, sure. But the speed of this displacement is very new and it happened basically overnight. How do you expect these displaced people to sustain themselves during the retraining period? There's only so many McJobs, and it's not as easy as you think to get one right now. I just watched someone with a college degree apply to everything for 2 months before landing one. There's also the deeply-held belief that people are only valuable if they work, which I think many of us subconsciously believe, but that's pretty messed up if you reflect on it and follow it to its logical conclusion.
There's also the principle of the matter that we shouldn't have to pay for a share of something that was built using our collective unpaid labor/property without our consent.
More subtly, I'd modify the proposal to require payment to any sources an AI uses, and strong enforcement against infringement by AI. The core problem with capitalist society is that money tends to bubble up to the top and then stay there. The goal of regulation should partly be to make sure that money is incentivized not to stay at the top in stocks.
I appreciate the idealism but your argument has some flaws.
Firstly the "theft component" isn't exactly new. There have always been rich and poor.
Secondly everyone is standing on the shoulders of giants. The Beatles were influenced by the works of others. Paul and John learned to write by mimicking other writers.
That code you write is the pinnacle of endless work done by others. By Ada Lovelace, and Charles Babbage, and Alan Turing and Brian Kernighan and Dennis Ritchie and Doug Engelbart and thousands and thousands more.
By your logic the entire output of all industries for all foreseeable generations should be universally owned. [1]
But that's not the direction we have built society on. Rather society has evolved in the US to reward those who create value out of the common space. The oil in Texas doesn't belong to all Texans, it doesn't belong to the pump maker, it belongs to the company that pumps the oil.
Equally there's no such thing as 'your data'. It's your choice to publish or not. Information cannot be 'owned'. Works can be copyrighted, but frankly you have a bigger argument on that front going after Google (and Google Books, not to mention the Internet Archive) than AI. AI may train on data you produced, but it does not copy it.
[1] I'm actually for a basic income model, we don't need everyone working all day like it's 1900 anymore. That means more taxes on companies and the ultra wealthy. Apparently voters disagree as they continue to vote for people who prefer the opposite.
I think your last point is very reductionist. Nearly every country ends up in a voting situation where only 2 parties can realistically win. A diverse parliament results in paralysis and the fall of government (it happened in my home country multiple times).
The two parties that end up viable tend to be financed quite heavily by said wealthy, including being propped up by the media said wealthy control.
The more right-wing side will promise tax cuts (including cuts for the poor that don't seem to materialize) while the more left-wing side will promise to tax the rich (but in an easily dodgeable way that only ends up affecting the middle class).
Many people understand this and it is barely part of the consideration in their vote. The last election in the US was a social battle, not really an economic one. And I think the wealthy backers wanted it that way.
I'm not sure why you are being downvoted. You make a reasonable argument.
I would contest some of your points though.
Firstly, not every country votes, and not all that vote have 2 viable parties, so that's a flaw in your argument.
Equally most elections produce a winner. That winner can, and does, get stuff done. The US is paralyzed because it takes 60% to win the senate, which hasn't happened for a while. So US elections are set up so "no one wins". Which of course leads to overreach etc that we're seeing currently.
There's a danger when living inside a system that you assume everywhere else is the same. There's a danger when you live in a system that heavily propagandizes its own superiority, that you start to feel like everywhere else is worse.
If we are the best, and this system is the best, and it's terrible, then clearly all hope is lost.
But what if, maybe, just maybe, all those things you absolutely, positively know to be true are not true? Is that even worth thinking about?
But I know people whose preference would be something like Ron Paul > Bernie Sanders > Trump > Kamala, which might sound utterly bizarre until you realize that there are multiple factors at play and "we want tax cuts for the rich" is not one of them.
When you vote for a guy who plans to raise prices, when you vote for a guy who already tried to remove Healthcare, when you vote for a guy who gives tax breaks to the rich, when you vote for a guy who is a grifter, then don't complain when you get what you voted for.
People are welcome to whatever preference they like. Democracy lets them choose. But US democracy is deliberately designed to prefer the "no one wins" scenario. That's not the democracy most of the world uses.
> Nearly every country ends up in a voting situation where only 2 parties can realistically win.
Not necessarily. That's a result of first-past-the-post, not of voting in general. Ranked-choice voting solves a lot of this extreme two-party dynamic: the dominant parties need to at least pretend to appeal enough to moderates that a third party isn't outvoting both of them.
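To make the mechanism concrete, here's a minimal sketch of an instant-runoff count (the most common form of ranked-choice voting). The `instant_runoff` function and the ballot data are made up for illustration, not taken from any real election system; real RCV rules also cover ties and exhausted ballots, which this sketch glosses over.

```python
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of preference-ordered candidate lists; returns the winner."""
    ballots = [list(b) for b in ballots]
    while True:
        # Count each ballot's current top choice.
        tallies = Counter(b[0] for b in ballots if b)
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total:  # strict majority wins
            return leader
        # No majority: eliminate the last-place candidate and let those
        # ballots transfer to their next preference.
        loser = min(tallies, key=tallies.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Ranking a third party first doesn't "spoil" your second choice:
ballots = (
    [["Third", "A"]] * 3   # third-party voters who prefer A over B
    + [["A"]] * 4
    + [["B"]] * 5
)
print(instant_runoff(ballots))  # Third is eliminated, its votes flow to A: prints "A"
```

Under first-past-the-post the same ballots would elect B with 5 of 12 votes; here the third-party ballots transfer instead of splitting the vote, which is the dynamic the comment above describes.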
>Many people understand this and it is barely part of the consideration in their vote. The last election in the US was a social battle, not really an economic one.
So the right wingers never really cared about inflation, egg prices, and the job market. I wish I could pretend to be shocked at this point.
> Not necessarily. That's a result of first-past-the-post, not of voting in general. Ranked-choice voting solves a lot of this extreme two-party dynamic: the dominant parties need to at least pretend to appeal enough to moderates that a third party isn't outvoting both of them.
Yup, we really need to fix this problem in many countries. Ranked choice is a great idea that should be pushed for.
> So the right wingers never really cared about inflation, egg prices, and the job market. I wish I could pretend to be shocked at this point.
That was my perception of it at least. I'm not a US citizen. Job market might have been a big one but even that is partially social as a rejection of globalism.
The newness or novelty of thievery isn't relevant to whether it's thievery or not.
The difference is that, for better or worse, our society chose to follow the model that artists own the rights to their work. That work was used for commercial purposes without the consent of the artists. Therefore it's theft.
I actually do believe all industries should be worker owned because the capitalists have proven they can't be trusted as moral and ethical stewards, but I'm specifically talking about GenAI here.
I think it's disingenuous to say that people have a choice to publish data or not in an economic system that requires them to publish or produce in order to survive. If an artist doesn't produce goods, then they aren't getting paid.
Also this is kind of a pedantic rebuttal but the GenAI software technically does first have to copy the data to then train on it :) But seriously, it can be prompted to reproduce copyrighted works and I don't think the rights holders particularly care how that happens, rather that it can and does happen at all.
There isn't any more theft in this than in artists copying the styles and techniques of popular artists to improve their craft.
This is 100% just the mechanization of a cultural refinement process that has been going on since the dawn of civilization.
I agree with you regarding how the bounty of GenAI is distributed. The value of these GenAI systems is derived far more from the culture they consume than the craft involved in building them. The problem isn't theft of data, but a capitalist culture that normalizes distribution of benefit in society towards those that are already well off. If the income of those billionaires and the profits of their corporations were more equitably taxed, it would solve a larger class of problems, of which this problem is an instance.
I disagree that artists copying styles is theft. Tracing or lifting exact elements without materially modifying them, sure, but studying and translating another artist's style takes effort and intent.
I agree with your overall point of wealth distribution but I don't think that excuses the data theft component of GenAI. It would still matter morally and ethically even if the financial aspect of it was solved. It's about consent.
>Than in artists copying the styles and techniques of popular artists to improve their craft.
We have not achieved GAI yet, so comparing the human mind to what's ultimately a robotic database is a comparison made on a flimsy premise. AI isn't generating a style any more than a user bashing 3 templates together is.
Even when we hit GAI, we have different issues. A brain can't perfectly recite a song it hears. It will not interpret the same soundwaves identically from brain to brain. It will not react the same way from brain to brain due to different experiences and perspectives. What GAI develops into may or may not take all this into account.
>If the income of those billionaires and the profits of their corporations were more equitably taxed, it would solve a larger class of problems, of which this problem is an instance.
Sure. We can also make sure they pay the artists being copied from while we tax them more too. Let's not dismiss theft by casting it off as magic. This isn't Now You See Me...
The scare for most people is that AI isn't better tools, but outsourced work. In the past we would create our own products, now other countries do this. In the past we did our own thinking and creative activities, now LLMs will.
If we don't have something better to do we'll all be at home doing nothing. We all need jobs to afford living, and already today many have bullshit jobs. Are we going to a world where 99.9% of the people need a bullshit job just to survive?
Personally I think your basic premise is false, hence your conclusion is false.
>> We all need jobs to afford living
In many countries this is already not true. There is already enough wealth that there is enough for everyone.
Yes, the western mindset is kinda "you don't work, you don't get paid". The idea that people can "free load" on the system is offensive at a really deep emotional level. If I suggest that a third of the people can work, and the other 2 thirds do nothing, but get supported, most will get distressed [1]. The very essence of US society is that we are defined by our work.
And yet if 2% of the work force is in agriculture, and produce enough food for all, why is hunger a thing?
As jobs become ever more productive, perhaps just -considering- a world where worth is not linked to output is a useful thought exercise.
No country has figured this out perfectly yet. Norway is pretty close. Lots of Europe has endless unemployment benefits. Yes, there's still progress to be made there.
[1] of course, even in the US, already it's OK for only a 3rd to work. Children don't work. Neither do retirees. Both essentially live off the labor of those in-between. But imagine if we keep raising the start-working age, while reducing retirement benefits age....
Sounds great in theory, but doesn't seem very realistic. There will always be people that want power over other people, and having more than others will give them that power.
And universally, if you have nothing, you lead a very poor life. You live in minimal housing (a trailer park, slums, or housing without running water or working sewage). You don't have a car, you can't travel, education opportunities are limited.
Most kids want to become independent, so they have control over their spending and power over their own lives. Poor retirees are unhappy, sometimes even have to keep working to afford living.
Norway is close because they have oil to sell, but if no one can afford to buy oil, and they can't afford to buy cars, nor products made with oil, Norway will soon run out of money.
You can wonder why Russia is attacking Ukraine; Russia has enough land, doesn't need more. But in the end there will always be people motivated by more power and money, which makes it impossible to create this communism 2.0 that you're describing.
You have equated a basic income with equality. That's a misunderstanding.
I'm not suggesting equality or communism. I'm suggesting a bottom threshold where you get enough even if you don't work.
Actually Norway gets most of that from investments, not oil. They did sell oil, but invested that income into other things. The sovereign wealth fund now pays out to all citizens in a sustainable way.
Equally your understanding of dole living in Europe is incomplete. A person on the dole in the UK is perfectly able to live in a house with running water etc. I know people who are.
Creating a base does not mean "no one works". Lots of people in Europe have a job despite unemployment money. And yes, most if not all jobs pay better than unemployment. And yes, lifestyles are not equal. It's not really communism (as you understand it).
This is not about equal power or equal wealth. It's about the idea that a job should not be linked to survival.
Why is 60 the retirement age? Why not 25? That sounds like a daft question, but understanding it can help you see how some things that seem cast in stone really aren't.
I live in Europe, so I understand some of it; part of my family comes from Eastern Europe, so I have also seen that form of communism in the past.
Living on welfare in the Netherlands is not a good life, and definitely not something we should accept for the majority of the people.
Being retired on only a state pension is a bad life, you need to save for retirement to have a good life. And saving takes time, that's why you can't retire at 25.
I'm saying that the blind acceptance of the status quo does not allow for that status to be questioned.
You see the welfare amounts, or retirement amounts as limited. Well then, what would it take to increase them? How could a society increase productivity such that more could be produced in less time?
Are some of our mindsets preventing us from seeing alternatives?
Given that society has reinvented itself many times through history, are more reinventions possible?
>Are some of our mindsets preventing us from seeing alternatives?
no, just corporate greed and political corruption. If we wanna change that, words won't do at this point.
>Given that society has reinvented itself many times through history, are more reinventions possible?
Yes, and through what catalyst has society reinvented itself? Reasonable discourse to a civil population appealing to emotion and reason? A sudden burst of altruism to try and cement a positive legacy?
It will reinvent itself eventually. Definitely in my lifetime. I don't know how many of us will survive to see the other side.
>If I suggest that a third of the people can work, and the other 2 thirds do nothing, but get supported, most will get distressed [1]. The very essence of US society is that we are defined by our work.
Sure, I'd be down for it. But I think that's less realistic; instead I think my country's government will make way for a rise of feudalism, except most will starve. It will make the Great Depression seem quaint in comparison.
>And yet if 2% of the work force is in agriculture, and produce enough food for all, why is hunger a thing?
I'd love to know the answer too. I think we both know the true answer, though.
>I'm not suggesting equality or communism. I'm suggesting a bottom threshold where you get enough even if you don't work.
That's the issue. Even right now, many don't get enough even when they work full time. Living is unsustainable. How is the problem going to get better, especially when those who would have to pay will instead lobby to not pay out to the people?
Assuming that's going to happen (outsourcing), what's wrong with that?
If you're a nationalist, your worry is obvious enough, but if you're a humanist, then it's wonderful that the more downtrodden are going to improve their station, while the better off wait for them.
Read the news, you’ll see what’s wrong. The people that lost their job to outsourcing will be unhappy, and very susceptible to easy “solutions” from populists, like blaming foreigners. And from there you go to unlawful deportations, and a very polarized country where some have everything and others have nothing due to no longer having a job. And when you have no job, and no money, but still have to pay the bills, crime seems a solution to some. So as a country you need to spend more on crime fighting, where that same money could have gone to education. But that’s not the solution people like, because “why should I pay for someone else’s education?” At the same time they’re overlooking the fact that education mostly has a positive ROI, as more education usually results in a better economy in a country, whereas crime fighting has a negative ROI for tax money spent.
That's not humanism; that sounds closer to utilitarianism. Humanism is about each person reaching their full potential.
I don't think the outsourcees are reaching their full potential being paid $2/hr to make American corporations billions. They are simply going to survive and lift themselves to a liveable standard.
Rings true for my impression too. In the end, she’s a YouTuber now, for better or worse, but still puts out what look like thoughtful and informative enough videos, whatever personal vendettas she holds.
I suspect for many who’ve touched the academic system, a popular voice that isn’t anti-intellectual or anti-expertise (or out to trumpet their personal theory), but critical of the status quo, would be viewed as a net positive.
> Ironically, the best answer to many of the article's suggestions (thousands rather than millions, easy to modify, etc.) is to write your own software with LLMs.
Not sure exactly what irony you mean here, but I’ll bite on the anti-LLM bait …
Surely it matters where the LLM sits against these values, no? Even if you’ve got your own program from the LLM that’s yours, so long as you may need alterations, maintenance, debugging or even understanding its nuances, the nature of the originating LLM, as a program, matters too … right?
And in that sense, are we at all likely to get to a place where LLMs aren’t simply the new mega-platforms (while we await the year of the local-only/open-weights AI)?
> Surely it matters where the LLM sits against these values, no?
Yes, I agree, but it's all trade-offs. The core problem is this:
1. Software is very expensive to write
2. So, you need to sell to as many people as possible
3. So, you need to add lots of features to attract as many people as possible
4. And you need to monetize it with ads, data-selling, and SaaS subscriptions.
5. But that makes software complicated, brittle, and frustrating.
LLMs can break the cycle if they make it cheap to write software. Instead of buying a mass-market product with 10x more features than you need, you create custom software that does exactly what you need and no more.
But aren't we trading one master for another? Instead of bowing down to Microsoft/Meta/Google, we bow down to OpenAI/Anthropic/Meta/Google? Maybe, but when an LLM writes code for you, you own the code. The code runs outside the LLM (usually) on an open platform.
But what if you have to modify the code? Then you ask an LLM (maybe not the original LLM) to modify the code. That's far easier than asking Google to modify Gmail.
If you believe in the suggestions of the author, then I don't think there is a better answer than LLMs. We don't live in a world where everyone can solve their software problems by forking some code, much less modifying it themselves.
And the reason I think it's ironic is because I suspect the author hates LLMs.
I disagree with this, right at the start. I think software is cheap to write but expensive to maintain when you try to sell to as many people as possible. It's the OpEx that kills you, not the CapEx. I go into this more in the current state of https://akkartik.name/about
So I wrote OP to encourage more exploration of the alternative path. If you build something and don't keep adding features to it in a futile attempt at land-grabbing "users" who will for the most part fail to pay you back for the over-investment your current VC-based milieu causes you to think is the only way to feel a sense of meaning from creating software -- if you don't keep adding features to it and you build on a substrate that's similarly not adding features and putting you on a perpetual treadmill of autoupdates, then software can be much less expensive.
I plan to just put small durable things out into the world, and to take a small measure of satisfaction in their accumulation over the course of my life. The ideal is a rock: it just sits inert until you pick it up, and it remains true to its nature when you do pick it up.
> LLMs can break the cycle if they make it cheap to write software. Instead of buying a mass-market product with 10x more features than you need, you create custom software that does exactly what you need and no more.
That's the critical question, isn't it. Will LLMs yield custom software that does exactly what you need and stabilizes? Or will they addict people to needing to endlessly tweak their output so AI companies can juice their revenue streams?
What skills does it take to nudge an LLM to create something durable for you? How much do people need to know, what skills do they need to develop? I don't know, but I feel certain that we will need new skills most people don't currently have.
Another way to rephrase the critical question: do you trust the real AIs here, the tech companies selling LLMs to you. Will the LLMs they peddle continue to work in 10 years time as well as they do today? If they enshittify, will you be prepared? Me, I'm deeply cynical about these companies even as LLMs themselves feel like a radical advance. I hope the world will not suffer from the value capture of AI companies the way it has suffered from the value capture of internet companies.
> I think software is cheap to write but expensive to maintain
OK, but I think you're agreeing with me. Regardless of why it is expensive, it drives companies to bloat their products (to increase their market) and to exploit dark patterns (to increase unit revenue).
If software were very cheap to create and maintain, then it would break that cycle.
> if you don't keep adding features to it and you build on a substrate that's similarly not adding features and putting you on a perpetual treadmill of autoupdates, then software can be much less expensive
In the 90s Microsoft found that people only used 10% of the features of Microsoft Excel. Unfortunately, everyone used a different 10%. At the limit, you would have to create a separate product for each feature permutation to cover the whole market.
And of course, creating and maintaining 10 different products is more expensive than 1 product with all the features.
> I plan to just put small durable things out into the world
This is great! Actions speak louder than words and you'll learn a lot in the process.
> Will LLMs yield custom software that does exactly what you need and stabilizes?
I agree that this is the critical question. No one knows (certainly I don't). But let's say the goal is to create custom software that does exactly what you need. Is there a practical path to that other than via LLMs? I don't think so.
> do you trust the real AIs here, the tech companies selling LLMs to you
I think this is orthogonal to whether the tech works at all. But, in general, yes, I trust most tech companies to provide value greater than the cost of their products. Pretty much by definition, for all the software I pay for, I trust the companies to deliver greater value. When that changes, I stop paying and switch.
And, of course, I support all the usual government regulators and public/private watchdogs to hold corporations accountable.
I think the differing stances towards tech companies might be the crucial axiomatic difference between our positions. I've just lived through 30 years of reduced regulation of Tech, and it's hard to imagine a world that reliably prevents that from recurring.
> In the 90s Microsoft found that people only used 10% of the features of Microsoft Excel. Unfortunately, everyone used a different 10%. At the limit, you would have to create a separate product for each feature permutation to cover the whole market.
They were approaching this from the other side, though, of already having built a ton of features and then trying to fragment a unified market. It doesn't work because from Microsoft's perspective the goal of Excel is market control at the cheapest price, and giving each user their 10% is more expensive.
But if you shift perspective to the users of Excel, you don't need to care about market control. If everyone starts out focusing on just the 10% they care about, it might be tractable to just build that for themselves. The total cost in the market is greater, particularly because I'm not imagining everyone using the same 10% is banding together in a single fork. But that becomes this totally fake metric that nobody cares about.
My approach involves throwing an order of magnitude more attention at the problem than people currently devote to computing. But a single order of magnitude feels doable and positive ROI. If everyone needs to become a programmer, that's many orders of magnitude and likely negative ROI. That's not what I'm aiming for.