Wow I knew many people had anti-AI sentiments, but this post has really hit another level.
It will be interesting to look back in 10 years at whether we consider LLMs to be the invention of the “tractor” of knowledge work, or if we will view them as an unnecessary misstep like crypto.
It’s interesting: it already is the former for niche areas in coding (e.g., basic web dev tasks). But as a whole, for areas like social media or increased surveillance, it could very well be a negative, and those affect a whole lot more people than coding and having more software would.
Thank you for at least acknowledging that we may eventually feel differently about AI.
I'm so tired of being called a luddite just for voicing reservations. My company is all in on AI. My CEO has informed us that if we're not "100% all in on AI", then we should seek employment elsewhere. I use it all day at work, and it doesn't seem to be nearly enough for them.
Executives like that make me daydream about different ways to build medium/large organizations that put executives on the bottom (or have none at all) and engineers on top.
These little lords of small fiefdoms make my skin crawl
I wonder if we would still call it "knowledge work" if no human knowledge or experience were required or in the loop anymore. And also whether we would stop looking up to it generally.
Because AI stands at odds with the concept of meritocracy, I also wonder if we will stop democratically electing other humans and outsource such tasks as well.
Overall I'm not seeing it. Progress is already slow and so far I personally think what AI can do is a nice party trick but it remains unimpressive if judged rigorously.
It doesn't matter if it can one-shot a game in a few minutes. The reason a game made by a human is probably still better is that the human spends hours and days of deep focus researching and creating it. It is not at all clear that, given as much time, AI could deliver the same results.
Seems like a good decision if they are trying to avoid consumers and focus on professional users who are more likely to create an account and pay. Especially if they are constrained on compute.
A year ago I could sometimes get o1-mini to write tests that I would then need to fix. Now I can get Opus 4.5 to do fairly complicated refactors with no mistakes.
These tools are seriously starting to become actually useful, and I’m sorry but people aren’t lying when they say things have changed a lot over the last year.
It might even be true this time, but there is no real mystery why many aren't inclined to invest more time figuring it out for themselves every few months. No need for the author of the original article to reach for a "they are protecting their fragile egos" style of explanation.
The productivity improvements speak for themselves. Over time, those who can use AI well and those who cannot will be rewarded or penalized by the free market accordingly.
If there’s evidence of productivity improvements through AI use, please provide more information. From what I’ve seen, the actual data shows that AI use slows developers down.
The sheer number of projects I've completed that I truly would never have been able to even make a dent in is evidence enough for me. I don't think research will convince you. You need to either watch someone do it, or experiment with it yourself. Get your hands dirty on an audacious project with Claude Code.
It sounds like you're building a lot of prototypes or small projects, which, yes, LLMs can be amazingly helpful at. But that is very much not what many/most professional engineers spend their time on, and generalizing from the former case often doesn't hold up in my experience.
We use both Claude and Codex on a fairly large, ~10-year-old Java project (~1900 Java files, 180K lines of code). Both tools are able to implement changes across several files, refactor the code, and add unit tests for the modified areas.
Sometimes the result is not great, sometimes it requires manual updates, and sometimes it just goes in the wrong direction and we discard the proposal. The good thing is you can initiate such a large change, go get a coffee, and when you're back take a look at the changes.
Anyway, overall those tools are pretty useful already.
"sheer number" combined with "completed" sounds more like lots of small projects (likely hobbyist or prototypes) than it does anything large/complicated/ongoing like in a professional setting.
It is, at this point, rather suspect that there are mountains of anecdata, but pretty much no high-quality quantitative data (and what does exist is mixed at best). Fun fact: worldwide, over 200 million people use homeopathy on a regular basis. They think it works. It doesn't work.
That's what it really all comes down to, isn't it?
It doesn't matter if you're using AI or not, just like it never mattered if you were using C or Java or Lisp, or using Emacs or Visual Studio, or using a debugger or printf's, or using Git or SVN or Rational ClearCase.
What really matters in the end is what you bring to market, and what your audience thinks of your product.
So use all the AI you want. Or don't use it. Or use it half the time. Or use it for the hard stuff, but not the easy stuff. Or use it for the easy stuff, but not the hard stuff. Whatever! You can succeed in the market with AI-generated product; you can fail in the market with AI-generated product. You can succeed in the market with human-generated product; you can fail in the market with human-generated product.
Or: I’m not going to do this refactor at all, even though it would improve the codebase, because it will be near impossible to ensure everything is correct after making so many changes.
To me, this has been one of the biggest advantages of both tests and types. They provide confidence to make changes without needing to be scared of unintended breakages.
There's a tradeoff point somewhere where it makes sense to go with one or the other. You can write a lot of code in bash and Elisp without having to care about the type of whatever you're manipulating, because you're mostly handling one type and encoding the actual values in a type system would be very cumbersome. But then there are other domains which are fairly well known, where the investment in encoding them in a type system does pay off.
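As a minimal sketch of the confidence types buy you during a refactor (the `Invoice` shape and field names here are hypothetical, just for illustration): change one field in a shared type and the compiler enumerates every call site that needs updating, instead of leaving silent runtime breakage.

```typescript
// Hypothetical refactor: the money field's unit is made explicit.
// Any code still reading the old `amount` field becomes a compile
// error, so the compiler lists the breakage for you.

interface Invoice {
  id: string;
  amountCents: number; // was `amount: number`, unit was ambiguous
}

function formatTotal(invoice: Invoice): string {
  // Reading `invoice.amount` here would no longer compile.
  return `$${(invoice.amountCents / 100).toFixed(2)}`;
}

console.log(formatTotal({ id: "inv-42", amountCents: 1999 })); // "$19.99"
```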
They are not. They found it by searching for extensions that had the capability to exfiltrate data.
> We asked Wings, our agentic-AI risk engine, to scan for browser extensions with the capability to read and exfiltrate conversations from AI chat platforms.
This feels like the new version of not using version control or never making backups of your production database. It’ll be fine until suddenly it isn’t.
I have hourly snapshots of everything important on that machine and I can go back through the network flow logs, which are not on that device, to see if anything was exfiltrated long after the fact.
It's not like I'm running it where it could cause mayhem. If I ever run it on the PCI-DSS infra, please feel free to terminate my existence because I've lost the plot.
It’s easy to be against it now because so much content that people recognise as AI is also just bad. If professionals can start to use it to produce content that is actually good, I think opinions will shift.
There are a lot of AI videos that you can very easily tell are AI, even if they are done well. For example, I just saw a Higgsfield video of a kangaroo fighting in the UFC. You can tell it is AI, mainly because it would be an insane amount of work to create any other way. But I think it is getting close to good enough that a lot of people, even knowing it is AI, wouldn't care. Everyone other than the most ardent anti-AI people is going to be fine with this once we have people creating interesting and engaging media with AI.
I think we will look back at AI "slop" as a temporary point in time where people were creating bad content, and people were defending it as good even when it was not. Instead, as you say, AI video will fall into the background as a tool creators use, just like cameras or CGI. But in my opinion it won't be that people can't tell that AI was used at all. Rather, it will be that they won't care if there is still a creative vision behind it.
At least, that is what I hope compared to the outcome where there are no creators and people just watch Sora videos tailored to them all day.
We already have digital IDs in Australia, and it seems like a natural fit for this. The digital ID doesn't need to share much information with social media companies, it just needs to confirm your age. And then we don't need new 3rd-parties holding our personal information.
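To make that concrete, here is a minimal sketch of what a privacy-preserving age check could look like (TypeScript/Node; all names, fields, and the one-hour freshness window are my own assumptions, not the actual Australian digital ID scheme): the ID provider signs a single boolean claim, and the platform verifies the signature without ever seeing a name or date of birth.

```typescript
import { createVerify } from "node:crypto";

// Hypothetical attestation: the ID provider discloses only a boolean
// age claim, never the user's name or date of birth.
interface AgeAttestation {
  over16: boolean;   // the single fact being shared
  issuedAt: number;  // epoch millis, used to reject stale tokens
}

// What a platform would check. `idProviderPublicKey` is assumed to be
// a PEM key distributed out of band by the ID scheme.
function verifyAgeAttestation(
  payload: string,   // JSON-encoded AgeAttestation
  signature: Buffer,
  idProviderPublicKey: string,
): boolean {
  const verifier = createVerify("SHA256");
  verifier.update(payload);
  if (!verifier.verify(idProviderPublicKey, signature)) return false;

  const claim: AgeAttestation = JSON.parse(payload);
  const oneHourMs = 60 * 60 * 1000;
  return claim.over16 && Date.now() - claim.issuedAt < oneHourMs;
}
```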
Also yes, voting is mandatory in Australia. You get a small fine if you don't vote.
It's a very good system. $20 is the right number to get you off the couch, but not so much as to cripple you. There are exceptions if you have a valid reason for not voting. The maximum fine is ~$180 so you can't simply ignore the Elections Commission and hope it goes away.
> These kinds of tasks ought to have been automated a long time ago.
It’s much easier to write business logic in code. The entire value of CRUD apps is in their business logic. Therefore, it makes sense to write CRUD apps in code rather than in some app builder.
And coding assistants can finally help with writing that business logic, in a way that frameworks cannot.
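As a toy illustration of that point (the rule and field names are invented): even a small pricing rule is a few readable, testable lines in code, whereas clicking the same branching together in a visual app builder is exactly the part that doesn't scale.

```typescript
// Hypothetical CRUD-app business rule, expressed directly in code.
interface Order {
  customerTier: "standard" | "gold";
  subtotalCents: number;
  itemCount: number;
}

function discountCents(order: Order): number {
  if (order.customerTier === "gold" && order.subtotalCents > 10_000) {
    return Math.round(order.subtotalCents * 0.1); // 10% for large gold orders
  }
  if (order.itemCount >= 20) {
    return 500; // flat bulk discount
  }
  return 0;
}

console.log(discountCents({ customerTier: "gold", subtotalCents: 25_000, itemCount: 3 })); // 2500
```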