> AI research tools are increasingly useful if you're a competent researcher that can judge the output and detect BS.
This assumes we even need more Terence Taos by the time these kids are old enough. AI has gone from being completely useless to solving challenging math problems in less than 5 years. That trajectory doesn't give me much hope that education will matter at all in a few years.
> Code quality is still a culture and prioritisation issue more than a tool issue.
AI helps people who "write" (i.e. generate) low-quality code more than people who write high-quality code. This means AI will lead to a larger percentage of new code being low-quality.
Yeah, I'm always confused why programmers seem to like this technology given the vast negative consequences it will likely have for us. The upsides, on the other hand, seem insignificant by comparison.
> The upsides, on the other hand, seem insignificant by comparison
An abundance of intelligence on Earth with all its spoils: new medicine, energy, materials, technologies and new understandings and breakthroughs - these seem quite significant to me.
There is absolutely no guarantee that those things will happen just because Claude takes your job. Taking your job doesn't require super-intelligence; it doesn't even require human-level intelligence. It requires just enough intelligence to pump out mediocre code that sort of works, combined with being way cheaper to run than your pay.
Super-intelligence is a completely different can of worms. But I'm not optimistic about super-intelligence either. It seems super naive to me to assume that the spoils of super-intelligence will be shared with the people who can no longer bring anything to the table. You aren't worth anything to the super-rich unless you can do something for them which the super-intelligence can't do.
There is absolutely no guarantee that Claude takes your job either. But if you believe so much in AI, investing in it is accessible to pretty much any pocket, you don't have to be rich to partake.
And when did "the rich" hoard anything for themselves only?! Usually I see them democratizing products and services so they are more accessible to everyone, not less.
Computers in my pocket and on my wrist, TVs as big as a wall and thin like a book, electric cars, flights to anywhere I dream of traveling, investing with a few clicks on my phone - all made possible to me by those evil and greedy rich in their race for riches. Thank you rich people!
You still need to be rich to partake. Most business ventures will still require capital even in the age of super-intelligence. Super-intelligence will make labor worthless (or very cheap); it won't make property worthless.
> And when did "the rich" hoard anything for themselves only?! Usually I see them democratizing products and services so they are more accessible to everyone, not less.
There are plenty of examples of rich people hoarding their wealth. Countries with natural resources often have poor citizens because those citizens are not needed to extract that wealth. There is little reason why super-intelligence will not lead to a resource curse where the resource is human intelligence or even human labor.
> Computers in my pocket and on my wrist, TVs as big as a wall and thin like a book, electric cars, flights to anywhere I dream of traveling, investing with a few clicks on my phone - all made possible to me by those evil and greedy rich in their race for riches. Thank you rich people!
Those rich people didn't share with you out of the goodness of their heart but because it was their best strategy to become even richer. But that's no longer the case when you can be replaced by super-intelligence.
Again, you can invest, today, in AI stocks and ETFs, with just $100 and a Robinhood account. No need to be rich.
> Super-intelligence will make labor worthless (or very cheap); it won't make property worthless.
If the labor is worthless, the great majority of people will be poor. Due to the law of supply & demand, property will be worthless since there will be very little demand for it.
> Countries with natural resources often have poor citizens because those citizens are not needed to extract that wealth.
Countries with or without resources often have poor citizens simply because being poor is the natural state of mankind. The only system that, historically, allowed the greatest number of people to exit poverty is capitalism. Here in Eastern Europe we got to witness an astonishing change of fortunes when we switched from communism to capitalism. The country and its resources didn't change, just the system and, correspondingly, the wealth of the population.
> it was their best strategy to become even richer. But that's no longer the case when you can be replaced by super-intelligence.
How can they become richer when most people are dirt broke (because they were replaced by AIs) and thus can't buy their products and services? Look at how even Elon's fortunes shrink when his company misses a sales forecast. He is only as rich as the number of customers he can find for his cars.
> Again, you can invest, today, in AI stocks and ETFs, with just $100 and a Robinhood account. No need to be rich.
And then? I'll compensate the loss of thousands of dollars I don't earn anymore every month with the profits of a $100 investment in some ETF?
> If the labor is worthless, the great majority of people will be poor. Due to the law of supply & demand, property will be worthless since there will be very little demand for it.
Property has inherent value. A house I can live in. A farm can feed me. A golf course I can play golf on. These things have value even if nobody can buy them off me (because they don't have anything I want). Supply and demand determine only the _price_ not the _value_ of goods and services.
> Countries with or without resources often have poor citizens simply because being poor is the natural state of mankind. The only system that, historically, allowed the greatest number of people to exit poverty is capitalism. Here in Eastern Europe we got to witness an astonishing change of fortunes when we switched from communism to capitalism. The country and its resources didn't change, just the system and, correspondingly, the wealth of the population.
None of this has any connection to anything I've written. I'm talking about the concept of a resource curse. Countries rich in natural resources (oil, diamonds, ...) where the population is poor as dirt because the ruling class has no incentive to share any of the profits. The same can happen with AI if we don't do anything about it.
> How can they become richer when most people are dirt broke (because they were replaced by AIs) and thus can't buy their products and services?
Other rich people can buy their products and services. They don't need you as a customer, because all you have to bring to the table is labor, and labor isn't worth anything (or at least not enough to survive off it). Put differently: Why do you think rich people would like to buy your labor if using AI/robots is cheaper? What reason would they have to do that?
> Look at how even Elon's fortunes shrink when his company misses a sales forecast. He is only as rich as the number of customers he can find for his cars.
You're proving my point: Elon still lives in a world where labor is worth something. Because Elon lives in a world where labor is worth something, it is in his interest that there are many people capable of providing that labor to him. This means it is in his interest that the general population has access to food and water, is well educated, ...
If Elon were to live in a world where labor is done by AI/robots there would be little reason for him to care. Yes, he couldn't sell his cars to the average person anymore, but he wouldn't want to anyway. He could still sell his cars to Altman in exchange for an LLM that strokes his ego or whatever rich people want.
The point is: Because rich and powerful people still have to pay for labor, their incentives are at least somewhat aligned with the incentives of the average person.
> And then? I'll compensate the loss of thousands of dollars I don't earn anymore every month with the profits of a $100 investment in some ETF?
Probably most of it at least, because under your supposition that the AGI will replace labor we'll get incredibly cheap products and services as a result.
> Property has inherent value.
You weren't talking about inherent value when you wrote "Super-intelligence will make labor worthless (or very cheap); it won't make property worthless." which is what I replied to.
> None of this has any connection to anything I've written. I'm talking about the concept of a resource curse.
And my point was that the wealth of a nation does not come from its resources but from its entrepreneurs. Resources are usually a curse when monopolized and administered (looted) by corrupt governments, not when exploited by private entities. AIs controlled by governments would scare me indeed.
> Other rich people can buy their products and services.
> He could still sell his cars to Altman
Are you joking?! How many cars do you think Altman can buy?! Do you really think the rich people can be an actual market?! How many rich people do you think there are out there?! Are you talking about middle class by any chance?
> Why do you think rich people would like to buy your labor if using AI/robots is cheaper?
Because labor evolves too, just like it evolved when automation, IT and outsourcing came around. Yes, I can't sell my dirt digging services in the age of digging machines but I can learn to drive one and sell my services as a driver. Maybe I can't sell coding in the age of AI but I can sell my ability to understand, verify and control complex systems with code written by AIs.
And so on, you get the idea. Adaptation, creativity and innovation is the name of the game.
> You're proving my point
> The point is: Because rich and powerful people still have to pay for labor, their incentives are at least somewhat aligned with the incentives of the average person.
Not at all. My point was that Elon and rich people are interested in you as a customer, not for your labor. That is the old mindset and the one we need to evolve from. See yourself as selling and buying products and services, not your labor, and the world will be full of opportunities. "The rich" won't seem like a separate class from you, but regular people you can interact and profit from (while mutually benefiting).
> Probably most of it at least, because under your supposition that the AGI will replace labor we'll get incredibly cheap products and services as a result.
No, we will get cheap _labor_, not necessarily cheap _products_.
> You weren't talking about inherent value when you wrote "Super-intelligence will make labor worthless (or very cheap); it won't make property worthless." which is what I replied to.
I was talking about value, not price.
> AIs controlled by governments would scare me indeed.
What is the difference?
> Are you joking?! How many cars do you think Altman can buy?!
Why would Elon need to sell more cars? And for what exactly? You have nothing Elon wants.
> Maybe I can't sell coding in the age of AI but I can sell my ability to understand, verify and control complex systems with code written by AIs.
Unless the super-intelligence is better than you here too. Why wouldn't it be?
> Adaptation, creativity and innovation is the name of the game.
It is the name of the game until super-intelligence comes along which will be better at all of this than you. That's exactly the scary thing about super-intelligence.
> My point was that Elon and rich people are interested in you as a customer, not for your labor.
This is the same thing. I can only be a customer if I can bring something to the table that Elon wants from me. That thing is money. I can only bring money to the table if someone that has money needs something I can provide. That thing is human labor. If super-intelligence removes the economic value of human labor, I can no longer earn money and consequently Elon will not be interested in me as a customer.
> See yourself as selling and buying products and services, not your labor, and the world will be full of opportunities.
Where exactly is the difference between me "selling a service" and me selling "labor"?
> "The rich" won't seem like a separate class from you, but regular people you can interact and profit from (while mutually benefiting).
It doesn't matter whether or not you see the rich as a separate class. What matters is simply the following:
People who own a lot of stuff, don't sell their labor and/or buy a lot of labor will profit if labor becomes cheap. People who don't own a lot of stuff, sell their labor and don't buy a lot of labor face an existential threat if labor becomes cheap.
I feel like you're fighting the fallacy of "the rich" being collectively blamed for every problem, by giving them credit for everything instead.
We know that none of the goods you listed would be available to the masses unless there was profit to be gained from them. That's the point.
I have a hard time believing that a large group, motivated by and mutually benefiting from the progress of something, would result in worse outcomes than a few doing so. We just have never had an economic system that could offer that, so you assume the greedy motivations of a few is the only path towards progress.
> We just have never had an economic system that could offer that
Please propose it yourself.
> you assume the greedy motivations of a few is the only path towards progress
No. I assume the greedy motivations of the many is the best path towards progress. Any other attempts to replace this failed miserably. Ignoring human nature in ideologies never works.
That's extremely difficult. I just don't assume something is impossible because it hasn't been done yet. Especially when there is an active battle to undermine and destroy such ideas by almost every powerful entity on earth.
Literally none of what you just said is true. All of those things happened because there was a market opportunity, and there was a market opportunity because wealth was not just in the hands of the rich.
If you want to look at what historically has happened when the rich have had a sudden rapid increase in intelligence and labor, we have examples.
After the end of the Punic wars, the influx of slave labor and the diminution of the economic power of normal Roman citizens led to: accelerating concentration of wealth, civil war and an empire where the value of human life was so low that people were murdered in public for entertainment.
> All of those things happened because there was a market opportunity, and there was a market opportunity because wealth was not just in the hands of the rich.
Yet those things did not happen in communist countries (or happened way less in socialist ones), during the same time period, even though the market was there too. That is why EU's socialist countries consume high tech products and services from the USA and not the other way around.
> abundance. For those that own stuff and no longer have to pay for other people to work for them.
Why are you saying that? Anybody working for a living (but saving money) can invest in AI stocks or ETFs and partake in that potential abundance. Robinhood accounts are free for all.
These two are already difficult or impossible for many people. Especially a big chunk of USAmericans have been living paycheck to paycheck for a long time now, often taking multiple jobs just to make ends meet.
And then to gamble it on a speculative market, whose value does not correlate with its performance (see e.g. Tesla: compare its sales / market share with its market value relative to other car manufacturers). That's bad advice. And as an individual, you'd only see a fraction of what the big shareholders etc. earn. Investing is a secondary market.
> a big chunk of USAmericans have been living paycheck to paycheck
That doesn't mean they are poor, just poor with their money. Saving is a skill that needs to be learned.
> That's bad advice.
No. Not investing - when the S&P500 index has had a 6% inflation-adjusted annual historical return over the last 100 years - is bad advice. Not hedging against the arrival of an AGI that you think can replace you is bad advice.
It’s not a utopian idea to try as a society to hedge and distribute returns to all members. From a human resource perspective, it is insane (i.e. costly and inefficient) to put it on the shoulders of each individual. But that’s where we are (almost; happy to live in a country with still some public services like unified healthcare and retirement funds).
Yes, a society can leave it to each person to drill their own well to have water, at least for some. Or, you come up with the unheard-of idea of public utilities, so people can simply open a tap and enjoy. In some regions, even drink it. Personally, growing up in a lucky place like that, I have a hard time imagining ever living in a place that required me to buy bottled water.
Yes, you can demand each member of society to learn about ETFs. Personally, I enjoy every part of life where complexity is being dealt with for me; I wouldn’t survive a day without that collaboration.
We have a choice to design society, like we have a choice to design computer systems.
> It’s not a utopian idea to try as a society to hedge and distribute returns to all members.
It's not utopian, it's downright dystopian. Redistribution means forceful confiscation (through taxation) from the most productive members of society. This in practice means punishing work, innovation and creation and rewarding laziness and low productivity. This will logically lead to less productive societies that will fall behind and be either bought out or conquered by more successful, more aggressive societies. We've seen this scenario unfolding in the EU.
> I enjoy every part of life where complexity is being dealt with for me
Me too. But I want private companies dealing with that complexity, because market competition controls them and keeps them honest, unlike governments which are monopolies happy to give people free benefits and entitlements to buy themselves their next elections.
I also want participation in such schemes to be voluntary not compulsory since this keeps people responsible, aware and educated. Compulsory schemes are widely hated and rejected, even when being a net positive otherwise.
> public utilities
My water utility is a state-granted monopoly charging outrageous prices. Same for my electricity provider. I would love to quit them, but any competition was outlawed, of course.
> you can demand each member of society to learn about ETFs. Personally
If you have time, go to an online calculator and compute how much the amount taken from your salary every month for your pension would be worth today if it had been invested in an S&P500 index ETF - then compare that to your projected state pension. It was eye-opening to me.
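For a rough sense of the compounding involved, here's a minimal sketch (all numbers hypothetical: a flat $500/month contribution, the 6% inflation-adjusted annual return cited above applied monthly, no fees or taxes):

    // Sketch: future value of a fixed monthly contribution at an assumed
    // constant real annual return. All inputs are hypothetical.
    function futureValue(monthly: number, annualReturn: number, years: number): number {
      const r = annualReturn / 12; // approximate monthly rate
      const n = years * 12;        // number of monthly contributions
      // Future value of an ordinary annuity: FV = P * ((1 + r)^n - 1) / r
      return monthly * ((Math.pow(1 + r, n) - 1) / r);
    }

    console.log(futureValue(500, 0.06, 40).toFixed(0));
    // ≈ 995745: roughly $1M, versus $240,000 actually contributed

Actual results depend on your real contribution schedule and the sequence of returns, so treat this only as an order-of-magnitude comparison.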
It seems we live in two almost separate, only thinly connected realities (*). In such cases I’ve learned that while I wouldn’t mind deepening the exchange, for that to feel worthwhile for me it would have to happen in person and not online.
(*) my definition of reality includes not only observable “facts“ that we may agree on, but also our experience of it; our perception, judgments, values etc.
...which almost everybody agrees is a bubble, wondering not if, but when it will burst.
Investing in AI companies is just about the last piece of advice I'd give someone who's struggling financially.
The billionaires will largely be fine. They hedge their bets and have plenty of spare assets on the side. Little guy investors? Not so much. They'll go from worrying about their retirement plan to worrying about becoming homeless.
Software engineers have been automating our own work since we built the first assembler. So far it's just made us more productive and valuable, because the demand for software has been effectively unlimited.
Maybe that will continue with AI, or maybe our long-standing habit will finally turn against us.
> Software engineers have been automating our own work since we built the first assembler.
The declared goal of AI is to automate software engineering entirely. This is in no way comparable to building an assembler. So the question is mostly about whether or not this goal will be achieved.
Still, nobody is building these systems _for_ me. They're building them to replace me, because my living is too much for them to pay.
Automating away software engineering entirely is nothing new. It goes all the way back to BASIC and COBOL, and later visual programming tools, Microsoft Access, etc. There have been innumerable attempts to somehow get by without needing those pedantic and difficult programmers and all their annoying questions and nitpicking.
But here's the thing: the hard part of programming was never really syntax, it was about having the clarity of thought and conceptual precision to build a system that normal humans find useful despite the fact they will never have the patience to understand let alone debug failures. Modern AI tools are just the next step to abstracting away syntax as a gatekeeper function, but the need for precise systemic thinking is as glaringly necessary as ever.
I won't say AI will never get there—it already surpasses human programmers in much of the mechanical and rote knowledge of programming language arcana—but it still is orders of magnitude away from being able to produce a useful system when specified by someone who does not think like a programmer. Perhaps it will get there. But I think the barrier at that point will be the age-old human need to have a throat to choke when things go sideways. Those in power know how to control and manipulate humans through well-understood incentives, and this applies all the way to the highest levels of leadership. No matter how smart or competent AI is, you can't just drop it into those scenarios. Business leaders can't replace human accountability with an SLA from OpenAI; it just doesn't work. Never say never I suppose, but I'd be willing to bet the wheels come off modern civilization long before the skillset of senior software engineers becomes obsolete.
> Modern AI tools are just the next step to abstracting away syntax as a gatekeeper function, but the need for precise systemic thinking is as glaringly necessary as ever.
Syntax is not a gatekeeper function. It’s exactly the means to describe the precise systemic thinking. When you’re creating a program, you’re creating a DSL for multiple subsystems, which you then integrate.
The subsystems can be abstract, but we usually define good software by how closely fitted the subsystems are to the problem at hand, meaning adjustments only need slight code alterations.
So viewing syntax as a gatekeeper is like viewing sheet music as a gatekeeper for playing music, or numbers and arithmetic as a gatekeeper for accounting.
The difference is that human language is a much more information-dense, higher-level abstraction than code. I can say "an async function that accepts a byte array, throws an error if it's not a valid PNG image with a 1:1 aspect ratio and resolution >= 100x100, resizes it to 100x100, uploads it to the S3 bucket env.IMAGE_BUCKET with a UUID as the file name, and retries on failure with exponential backoff up to a maximum of 100 attempts", and you'll have a pretty good idea of what I'm describing despite the smaller number of characters than equivalent code.
I can't directly compile that into instructions which will make a CPU do the thing, but for the purposes of describing that component of a system, it's at about the right level of abstraction to reasonably encode the expected behavior. Aside from choosing specific libraries/APIs, there's not much remaining depth to get into without bikeshedding; the solution space is sufficiently narrow that any conforming implementation will be functionally interchangeable.
AI is just laying bare that the hard part of building a system has always been the logic, not the code per se. Hypothetically, one can imagine that the average developer in the future might one day think of programming language syntax in the same way that an average web developer today thinks of assembly. As silly as this may sound today, maybe certain types of introductory courses or bootcamps would even stop teaching code, and focus more on concepts, prompt engineering, and developing/deploying with agentic tooling.
I don't know how much learning syntax really gatekeeps the field in practice, but it is something extra that needs to be learned, where in theory that same time could be spent learning some other aspect of programming. More significant is the hurdle of actually implementing syntax; turning requirements into code might be cognitively simple given sufficiently baked requirements, but it is at minimum time-consuming manual labor which not everyone is in a position to easily afford.
I won't unless both you and I have a shared context which will tie each of these concepts to a specific thing. You said "async function", and there's a lot of languages that don't have that concept. And what about the permissions of the s3 bucket? What's the initial wait time? And what algorithm for the resizing? What if someone sent us a very big image (say, the maximum the standard allows)?
These are still logic questions that have not been addressed.
The thing is that general programming languages are general. We do have constructs like procedures/functions and classes, which allow for more specialized notation, but that's a skill to acquire (like writing clear and informative text).
square(P) :- width(P, W), height(P, H), W is H.
validpng(P, X) :- a whole list of clauses that parses X and build up P, square(P).
resizepng(P) :- bigger(100,100, P), scale(100, 100, P).
smallpng(P, X) :- validpng(P, X), resizepng(P).
s3upload(P) :- env("IMAGE_BUCKET", B), s3_put(P, B, exp_backoff(100)).
fn(X) :- smallpng(P, X), s3upload(P).
So what you've left out is all the details. It's great if someone already has a library that does the thing, and the functions have the same signatures, but more often than not, there isn't something like that.
Code can be as high-level as you want and very close to natural language. Where people spend time is the implementation of the lower levels and dealing with all the failure modes.
Details like the language/stack and S3 configuration would presumably be somewhere else in the spec, not in the description of that particular function.
The fact that you're able to confidently take what I wrote and stretch it into pseudocode with zero deviation from my intended meaning proves my point.
To draft a spec like this, it would take more time and the same or more knowledge than to just write the code. And you still won’t have reliable results, without doing another lengthy pass to correct the generated code.
I can create a pseudocode because I know the relevant paradigm as well as how to design software. There’s no way you can have a novice draft pseudo-code like this because they can’t abstract well and discern intent behind abstractions.
I don't agree that it would take more time. Drafting detailed requirements like that to feed into coding agents is a big part of how I work nowadays, and the difference is night and day. I certainly didn't spend as much time typing that function description as I would have spent writing a functional version of it in any given language.
Collaborating with AI also speeds this up a lot. For example, it's much faster to have the AI write a code snippet involving a dependency/API and manually verify the code's correctness for inclusion in the spec than it is to read through documentation and write the same code by hand.
The feat of implementing that function based on my description is well within the capabilities of AI. Grok did it in under 30 seconds, and I don't see any obvious mistakes at first glance: https://grok.com/share/c2hhcmQtMw_fa68bae1-3436-404b-bf9e-09....
I don't have access to the grok sample you've shared (service not available in my region)
Reading the documentation is mostly for gotchas and understanding the subsystem you're going to incorporate in your software. You can not design something that will use GTK or sndio without understanding the core concepts of those technologies. And if you know the concepts, then I will say it's easier and faster to write the code than to write such specs.
As for finding samples, it's easy on the web. Especially with GitHub search. But these days, I often take a look at the source code of the library itself, because I often have questions that the documentation doesn't have the answer for. It's not about what the code I wrote may do (which is trivial to know) but what it cannot do at all.
Ah, weird, that's good to know. Well here's the code:
import { env } from './env';
import { v4 as uuidv4 } from 'uuid';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import sharp from 'sharp';

async function retry<T>(fn: () => Promise<T>, maxAttempts: number): Promise<T> {
  let attempt = 1;
  while (true) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxAttempts) {
        throw error;
      }
      const delayMs = Math.pow(2, attempt - 1) * 100;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      attempt++;
    }
  }
}

export async function processAndUploadImage(s3: S3Client, imageData: Uint8Array): Promise<string> {
  let metadata;
  try {
    metadata = await sharp(imageData).metadata();
  } catch {
    throw new Error('Invalid image');
  }
  if (metadata.format !== 'png') {
    throw new Error('Not a PNG image');
  }
  if (!metadata.width || !metadata.height || metadata.width !== metadata.height || metadata.width < 100) {
    throw new Error('Image must have a 1:1 aspect ratio and resolution >= 100x100');
  }
  const resizedBuffer = await sharp(imageData).resize(100, 100).toBuffer();
  const key = `${uuidv4()}.png`;
  const command = new PutObjectCommand({
    Bucket: env.IMAGE_BUCKET,
    Key: key,
    Body: resizedBuffer,
    ContentType: 'image/png',
  });
  await retry(async () => {
    await s3.send(command);
  }, 100);
  return key;
}
The prompting was the same as above, with the stipulations that it use TypeScript, import `env` from `./env`, and take the S3 client as the first function argument.
You still need reference information of some sort in order to use any API for the first time. Knowing common Node.js AWS SDK functions offhand might not be unusual, but that's just one example. I often review source code of libraries before using them as well, which isn't in any way contradictory with involving AI in the development process.
From my perspective, using AI is just like having a bunch of interns on speed at my beck and call 24/7 who don't mind being micromanaged. Maybe I'd prefer the end result of building the thing 100% solo if I had an infinite amount of time to do so, but given that time is scarce, vastly expanding the resources available to me in exchange for relinquishing some control over low-priority details is a fair trade. I'd rather deliver a better product with some quirks under the hood than let my (fast, but still human) coding speed be the bottleneck on what gets built. The AI may not write every last detail exactly the way I would, but neither do other humans.
As I’m saying, for pure samples and pseudo-code demos, it can be fast enough. But why bring in the whole s3 library if you’re going to use one single endpoint? I’ve checked npmjs and sharp is still in beta mode (if they’re using semver). Also, the code is parsing the image data twice.
I’m not saying that I write flawless code, but I’m more for fewer features and better code. I’ve battled code where people would add big libraries just to avoid writing ten lines of code. And then they can’t reason about why a snippet fails, because it’s unreliable code layered on unreliable code. And then after a few months, you’ve got zombie code in the project. And the same thing implemented multiple times, in a slightly different way each time. These are pitfalls that occur when you don’t have a holistic view of the project.
I’ve never found coding speed to be an issue. The only time when my coding is slow is when I’m rewriting some legacy code and pausing every two lines to decipher the intent with no documentation.
But I do use advanced editing tools. Coding speed is very much not a bottleneck in Emacs. And I had a somewhat similar config for Vim. Things like quick access to docs, quick navigation (thing like running a lint program and then navigating directly to each error), quick commit, quick blaming and time traveling through the code history,…
> But why bring in the whole s3 library if you’re going to use one single endpoint?
This is a bit of a reach. There's no reason to assume that the entire project would only be using one endpoint, or that AI would have any trouble coding against the REST API instead if instructed to. Using the official SDK is a safe default in the absence of a specific reason or instruction not to.
Either way, we're already past the point of demonstrating that AI is perfectly capable of writing correct pseudocode based on my description.
> Coding speed is very much not a bottleneck in Emacs.
Of course it is. No editor is going to make your mind and fingers fast enough to emit an arbitrarily large amount of useful code in 0 seconds, and any time you spend writing code is time you're not spending on other things. Working with AI can be a lot harder because the AI is doing the easy parts while you're multitasking on all the things it can't do, but in exchange you can be a lot more productive.
Of course you still need to have enough participation in the process to be able to maintain ownership of the task and be confident in what you're committing. If you don't have a holistic view of the project and just YOLO AI-generated code that you've never looked at into production, you're probably going to have a bad time, but I would say the same thing about intern-generated code.
> I’m more for fewer features and better code.
Well that's part of the issue I'm raising. If you're at the point of pushing back on business requirements in the interest of code quality, that's just another way of saying that coding speed is a bottleneck. Using AI doesn't only help with rapidly pumping out more features; it's an extremely useful tool for fixing bugs at a faster pace.
IMO, useful code is code in production (or if it’s for myself, something I can run reliably). Anything else is experimentation. If you’re working in a team, code shared with others is at proposal/demo level.
Experimentation is nice for learning purposes. Kinda like scratch notes and manuscripts in the writing process. But then, it’s in the editing phase that you’re stamping out bugs, with tools like static analysis, automated testing, and manual QA. The whole goal is to have the feature in the hands of the users. Then there’s the errata phase for errors that have slipped through.
But the thing is, code is just a static representation of a very dynamic medium, the process. And a process has a lot of layers. The code is usually a small part of the whole. For the whole thing to be consistent, parts need to be consistent with each other, and that’s when contracts come into play. The thing with AI-generated code is that it doesn’t respect contracts, because of its nature (non-deterministic) and the fact that the code (which is the most faithful representation of the contracts) can be contradictory (which leads to bugs).
It’s very easy to write optimistic code. But as the contracts (or constraints) in the system grow in number, they can be tricky to balance. The recourse is always to go up a level in abstraction: make the subsystems black boxes and consider only their interactions. This assumes that the subsystems are consistent in themselves.
Code is not the lowest level of abstraction, but it’s often correct to assume that the language itself is consistent. Then there are the libraries, and the quality varies. Then there's the framework, and often it’s all good until it’s not. Then it’s your code, and that’s very much a mystery.
All of this to say that writing code is the same as writing words on a manuscript to produce a book. It’s useful, but only if it’s part of the final product or helps in creating it. Especially if it’s not increasing the technical debt exponentially.
I don’t work with AI tools because by the time I’m OK with the result, more time has been spent than if I’d done the thing without them. And the process is not even enjoyable.
Of course; no one said anything about experimentation. Production code is what we're talking about.
If what you're saying is that your current experience involves a lot of process and friction to get small changes approved, that seems like a reasonable use case for hand-coding. I still prefer to make changes by hand myself when they're small and specific enough that explaining the change in English would be more work than directly making the change.
Even then, if there's any incentive to help the organization move more quickly, and there's no policy against AI usage, I'd give it a shot during the pre-coding stages. It costs almost nothing to open up Cursor's "Ask" mode and bounce your ideas off of Gemini or have it investigate the root cause of a bug.
What I typically do is have Gemini perform a broad initial investigation and describe its findings and suggestions with a list of relevant files, then throw all that into a Grok chat for a deeper investigation. (Grok is really strong at analysis in general, but its superpower seems to be a willingness to churn on sufficiently complex problems for as long as 5+ minutes in order to find the right answer.) I'll often have a bunch of Cursor agents and Grok chats going in parallel — bouncing between different bug investigations, enhancement plans, and one or two code reviews and QA tests of actual changes. Most of the time that AI saves isn't the act of emitting characters in and of itself.
Who declared it? Who cares what anyone declares? What do you think will actually happen? If software can be fully automated, then sure SWEs will need to find a new job. But why wouldn't it increase productivity instead and there still are developer jobs, just different.
> The declared goal of AI is to automate software engineering entirely.
It's hardly the first thing that has that as its “declared goal” (i.e., the fantasy sold to investors to get capital and to the media to get attention).
This is kind of a myopic view of what it means to be a programmer.
If you're just in it to collect a salary, then yeah, maybe you do benefit from delivering the minimum possible productivity that won't get you fired.
But if you like making computers do things, and you get joy from making computers do more and new things, then LLMs that can write programs are a fantastic gift.
> But if you like making computers do things, and you get joy from making computers do more and new things, then LLMs that can write programs are a fantastic gift.
Maybe currently if you enjoy social engineering an LLM more than writing stuff yourself. Feels a bit like saying "if you like running, you'll love cars!"
In the future when the whole process is automated you won't be needed to make the computer do stuff, so it won't matter whether you would like it. You'll have another job. Likely one that pays less and is harder on your body.
Some people like running, and some people like traveling. Running is a fine hobby, but I'm still glad that planes exist.
Maybe some future version of agentic tooling will decimate software engineering as a career path, but that's just another way of saying that everyone and their grandmother would suddenly have the ability to launch a tech startup. Having gone through fundraising in the past, I'd personally prefer to live in a world where anyone with a good idea could get access to the equivalent of a full dev team without raising a dime.
But you're not making the computer do things, you're making an idea for a new thing a computer can do and then outsourcing the part of the "making it do things" that is actually fun and fulfilling. I don't get it -- the joy for me comes from learning and problem solving, not coming up with ideas and then communicating those ideas to a tool that can do the rest of the job for me.
I personally like AI but it has definitely shifted my job. There is less "writing code", more "reviewing code", and more "writing sentences in English". I can understand people being frustrated.
To me it's like a halfway step toward management. When you start being a manager, you also start writing less code and having a lot more conversations.
> To me it's like a halfway step toward management. When you start being a manager, you also start writing less code and having a lot more conversations.
I didn't want to get into management, because it's boring. Now I got forced into management and don't even get paid more.
> It's because I still need to earn a living and this technology threatens my ability to do so in the near future.
That's certainly not the reason most HNers are giving - I'm seeing far more claims that LLMs are entirely meaningless because either "they cannot make something they haven't seen before" or "half the time they hallucinate". The latter even appears as one of the first replies in this post's link, the X thread!
given that there has never been a technological advancement that was successfully halted to preserve the jobs it threatened to make obsolete, don't you see the futility of complaining about it? even if there was widespread opposition to AI - and no, there isn't - the capital would disregard it. no ragtag team of quirky rebels is going to blow up this multi-trillion dollar death star.
> don't you see the futility of complaining about it?
I'm not complaining to stop this. I'm sure it won't be stopped. I'm explaining why some people who work for a living don't like this technology.
I'm honestly not sure why others do. It pretty much doesn't matter what work you do for a living. If this technology can replace a non-negligible part of the white collar workforce it will have negative consequences for you. You don't have to like that just because you can't stop it.
> This didn’t fully come out of the blue. We have been told to expect the unexpected.
It absolutely did. Five years ago people would have told you that white collar jobs were mostly un-automatable and software engineering was especially safe due to its complexity.
In a concrete sense, what actually happened came out of the blue. I fully agree with that. That's not what I mean.
> We have been told to expect the unexpected.
But this didn't.
What happened is unexpected. And we've been told to expect that.
I understand that that's very broad, but the older people teaching me had a sense of how fast technology was accelerating. They didn't have a tv back in the day. They knew that work would change fast and the demands of work would change fast.
The way I've been taught at school is to actually be that wary and cautious. You need to pivot, switch and upskill fast, again and again.
What are humans better at than AI? So far, I've noticed it's being connected to other humans. So I'm currently at a job that pivots more towards that. I'm a data analyst + software engineer hybrid at the moment.
> Still, it is noticeable that with many of the AI companies claiming that their version of "AGI" is just around the corner, developers and staff don't appear to be particularly excited about this
Why would they be excited about it? There's little in it for them.
Nobody that has to work for a living should be excited for AI; they should be genuinely afraid of it. AGI will have vast, deeply negative consequences for almost everyone that has to work for a living.