
The economics of the force multiplier are too compelling to ignore, and I’m guessing SWEs who don’t learn how to use it consistently and effectively will be out of the job market in 5 or so years.


Back in the early 2000s the sentiment was that IDEs were a force multiplier that was too high to ignore, and that anyone not using something akin to Visual Studio or Eclipse would be out of a job in 5 or so years. Meanwhile, 20 years later, the best programmers you know are still using Vim and Emacs.


But the vast majority are still using an IDE - and I say this as someone who has adamantly used Vim with plugins for decades.

Something similar will happen with agentic workflows - those who aren't already productive with the status quo will have to eventually adopt productivity enhancing tooling.

That said, it isn't too surprising if the rate of AI adoption starts slowing down around now - agentic tooling has been around for a couple years now, so it makes sense that some amount of vendor/tool rationalization is kicking in.


It remains to be seen whether these tools are actually a net enhancement to productivity, especially accounting for longer-term / bigger-picture effects -- maintainability, quality assurance, user support, liability concerns, etc.

If they do indeed provide a boost, it is clearly not very massive so far. Otherwise we'd see a huge increase in the software output of the industry: big tech would be churning out new products at a record rate, tons of startups would be reaching maturity at an insane clip in every imaginable industry, new FOSS projects would be appearing faster than ever, ditto with forks of existing projects.

Instead we're getting an overall erosion of software quality, and the vast majority of new startups appear to just be uninspired wrappers around LLMs.


I'm not necessarily talking about AI code agents or AI code review (workflows where I think it's difficult for agents to show tangible proof of value against humans, though I've seen some of my portfolio companies building promising capabilities that will come out of stealth soon), but various other enhancements such as better code and documentation search, documentation generation, automating low-sev ticket triage, low-sev customer support, etc.

In those workflows and cases where margins and the dollar value provided are low, I've seen significant uptake of AI tooling where possible.

Even reaching this point was unimaginable 5 years ago, and is enough to demonstrate workflow and dollar value for teams.

To use another analogy, using StackOverflow or Googling was viewed derisively by neckbeards who constantly spammed RTFD back in the day, but now no developer can succeed without being a proficient searcher. And a major value that IDEs provided over traditional editors was that kind of recommendation capability, along with code quality/linting tooling.

Concentrating on abstract tasks where the ability to benchmark between human and artificial intelligence is difficult means concentrating on the trees while missing the forest.

I don't foresee codegen tools replacing experienced developers but I do absolutely see them reducing a lot of ancillary work that is associated with the developer lifecycle.


> I've seen significant uptake of AI tooling where possible.

Uptake is orthogonal to productivity gain. Especially when LLM uptake is literally being forced upon employees in many companies.

> I do absolutely see them reducing a lot of ancillary work that is associated with the developer lifecycle.

That may be true! But my point is they also create new overhead in the process, and the net outcome to overall productivity isn't clear.

Unpacking some of your examples a bit --

Better code and documentation search: this is indeed beneficial to productivity, but how is it an agentic workflow that individual developers need to adopt and become productive with, relative to the previous status quo?

Documentation generation: between the awful writing style and the lack of trustworthiness, personally I think these easily reduce overall productivity, when accounting for humans consuming the docs. Or in the case of AI consuming docs written by other AI, you end up with an ever-worsening cycle of slop.

Automating low sev ticket triage: Potentially beneficial, but we're not talking about a revolutionary leap in overall team/org/company productivity here.

Low sev customer support: Sounds like a good way to infuriate customers and harm the business.


Hard agree on documentation. In my view, generated documentation is utterly worthless, if not counterproductive. The point of documentation is to convey information that isn't already obvious from the code. If the documentation is just a padded, wordy text extrapolated from the code, reading it is a complete waste of time.


Another thing here is that LLMs don't have to be a productivity boost if it lets you be lazier. Sometimes I'll have an LLM do something and it doesn't save time compared to me doing it but I can fuck off while it's working and grab a drink or something. I can spend my mental energy on hard problems rather than looking through docs to find all of the right functions and plumb things in the code.


OK, but LLMs are being valued as if they are one of the most important technologies ever created. How much will companies pay for a product that doesn't boost productivity but allows employees to be lazier?


I think no one can predict what will happen. We need to wait until we can empirically observe who will be more productive on certain tasks.

That's why I started with AI coding. I wanted to hedge against the possibility that this takes off and I become useless. But it made me sad as hell, so I just said: Screw it. If this is the future, I will NOT participate.


The good thing is that the selling point of LLM tools is that they're dead easy to use, so even if you find yourself having to use them in the future, it won't be an issue. I know the AI faithful love talking about how non-believers will be "left behind" and stylize prompt engineering as some kind of deeply involved, complex new science, but it really isn't. As more down-to-earth AI fanatics have confirmed to me, it'll probably take you an afternoon of reading some articles on best practices and you'll be back amongst the best of them. This isn't like learning a new language or framework.


That’s fine, but you don’t want to be blindsided by changes in the industry. If it’s not for you, have a plan B career lined up so you can still put food on the table. Also, if you are good at both old-fashioned SE and AI, you’ll be OK either way.


As someone who uses vim full time, all that happened is people started porting all the best features of IDEs over to vim/emacs as plugins. So those people were right; it's just that the features flowed the other way.

Pretty sure you can count the number of professional programmers using vanilla vim/neovim on one hand.


People also started using vi edit mode inside IDEs. I've personally encountered that much more often.


It depends where you work. In gaming, the best programmers I know might not even touch the command line / Linux, and their "life" depends on Visual Studio... Why? Because the ecosystem around Visual Studio / Windows and how game console devkits work are pretty much tied together - while the PlayStation OS is some kind of BSD, and maybe Nintendo's too, all their proper SDKs are Windows-only and built around Visual Studio (there are some studios that are exceptions, but they're rare).

I'm sure other industries have their own similar examples. And the best folks on my own, much smaller team (infra) are the command-line, Linux/Docker/etc. guys who mostly use VSCode.


> Meanwhile, 20 years later, the best programmers you know are still using Vim and Emacs.

The best programmers I know are game programmers using Visual Studio. Real Visual Studio, not Visual Studio Code.

(vim is definitely a big thing but I'm not sure how many people I know who even use emacs anymore...)


I’m sceptical.

The models (even Claude Opus 4.5) still don't get things right, miss edge cases, and produce code in a way that's not very structured.

I use them daily, but I often have to rewrite a lot to reshape the codebase to a point where it makes sense to use the model again.

I’m sure they’ll continue to get better, but out of a job better in 5 years? I’m not betting on it.


Ya, you have to shape your code base; not just that, but get your AI to document your code base and come up with some sort of pipeline to have a different AI check things (rough sketch below).
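
Something like this, as a minimal sketch - call_model is a placeholder for whatever provider client you use, and the model names are made up:

    # Minimal sketch of a generate-then-review pipeline where a second
    # model checks the first one's output. call_model is a placeholder;
    # wire it up to your provider's chat API.
    def call_model(model: str, prompt: str) -> str:
        raise NotImplementedError("plug in your provider's client here")

    def generate_and_review(task: str, max_rounds: int = 3) -> str:
        code = call_model("generator-model", f"Implement: {task}")
        for _ in range(max_rounds):
            review = call_model("reviewer-model",
                                f"Review this code for bugs:\n{code}")
            if "LGTM" in review:  # naive stop condition for the sketch
                return code
            code = call_model("generator-model",
                              f"Revise per this review:\n{review}\n\nCode:\n{code}")
        return code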

It’s fine to be skeptical, and I definitely hope I’m wrong, but it really is looking bad for SWEs who don’t start adopting at this point. It’s a bad bet in my opinion; at least have your F-u money built up in 5 years if you aren’t going all in on it.


Why would you go all in? There is no learning curve, it seems. What is there to learn about using AI to code?


The learning curve is actually huge. If you just vibe code with AI, the results are going to suck. You basically have to reify all of your software engineering artifacts and get AI to iterate on them and your code as if it were an actual software engineer (one who forgets everything whenever you reboot it, which is why you have to make sure it can re-read the artifacts to get its context back up to speed). So a lot more planning, design, and test documentation than you would do in a normal project. The nice thing is that AI will maintain all of it as long as you set up the right structure.
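
As a rough illustration of the "re-read artifacts" part (the file names are just ones I made up, not any standard):

    # Sketch: rebuild the agent's context from persisted artifacts before
    # each session, since the model remembers nothing between runs.
    # The artifact file names here are hypothetical.
    from pathlib import Path

    ARTIFACTS = ["PLAN.md", "DESIGN.md", "TEST_NOTES.md"]

    def build_context(root: str = ".") -> str:
        """Concatenate the engineering artifacts into one context preamble."""
        parts = []
        for name in ARTIFACTS:
            path = Path(root) / name
            if path.exists():
                parts.append(f"## {name}\n{path.read_text()}")
        return "\n\n".join(parts)

    # Prepend build_context() to every prompt so the agent is back up to
    # speed before it touches the code.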

We are also in the early days still, I guess everyone has their own way of doing this ATM.


By this point you've burnt up any potential efficiency gains. You spent a lot of hours learning a new tool, which you then have to spend a lot of additional hours babysitting and correcting, so much that you'll be very far from those claimed productivity gains. Plus, the skills you need to verify and fix its output will atrophy. So that learning curve earns you nothing except the ability to put "AI" somewhere on your CV, which I expect will lose a lot of its lustre in 1-2 years' time, once everybody has had enough experience with vibe coders who don't, or no longer can, ensure the quality of their super-efficient output.


This is all bullshit btw.

Speaking as someone with a ton of experience here.

None of the things they do can go without immense efforts in validation and verification by a human who knows what they're doing.

All of the extra engineering effort could have been spent just making your own infrastructure and procedures far more resilient and valuable to far more people in your team and yourself going forward.

You will burn more and more hours over time because of relying on LLMs for ANYTHING non-trivial. It becomes a technical debt factory.

That's the reality.

Please stop listening to these grifters. Listen to someone who actually knows what they're talking about, like Carl Brown.


Care to share some links?

Not this one, presumably: https://en.wikipedia.org/wiki/Carl_Robert_Brown


He's the YouTuber "The Internet of Bugs".


That’s interesting, but how much of this, if written down, documented, and made into video tutorials, could be learnt by just about any good engineer in 1-2 weeks?


I don’t see much yet; maybe everyone is just winging it until someone influential gives it a name. The vibe coding crowd have set us back a lot, and really so did the whole leetcode interview fad that we're only now throwing off. It’s kind of obvious, though: just tell the AI to do what a normal junior SWE does (like write tests), but write a lot more documentation, because they forget things all the time (a junior engineer who makes more mistakes, so they need to test more, and who remembers nothing).


The trick is being a good engineer in the first place.


The concepts in the LLM's latent space are close to each other, and you find them by asking in the right way, so if you ask like an expert you find better stuff.

For it to work best you should be an expert in the subject matter, or something equivalent.

You need to know enough about what you're making not just to specify it, but to see where the LLM is deviating (perhaps because you needed to ask more specifically).

Garbage in garbage out is as important as ever.
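
A toy illustration - both prompts are invented for this example, but the second one's domain vocabulary steers the model toward the relevant region:

    # The same request phrased generically vs. with expert vocabulary.
    vague = "Make my database queries faster."

    expert = (
        "Our PostgreSQL plans show sequential scans on a 50M-row table "
        "joined without an index; suggest composite indexes and whether "
        "to rewrite the correlated subquery as a LATERAL join."
    )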


I hope you are joking and/or being sarcastic with this comment…


I don't think they really are.

There is, effectively, a "learning curve" required to make them useful right now, and a lot of churn on technique, because the tools remain profoundly immature and their results are delicate and inconsistent. To get anything out of them and trust what you get, you need to figure out how to hold them right for your task.

But presuming there's something real here, and there does seem to be, eventually all that will smooth out, and late adopters who decide they want to use the tools will be able to onboard themselves plenty fast. The whole vision is to make the work easier, more accessible, and more productive, after all. Having a big learning curve doesn't align with that vision.

Unless they happen to make you significantly more productive today on the tasks you want to pursue, which only seems to be true for select people, there's no particular reason to be an early adopter.


fantastic comment! I disagree on two fronts:

- we are far removed from “early adopter” stages at this point

- “eventually all that will smooth out…” assumes this is eventually going to be some magic that just works - if that actually happens, both early and late adopters will be unemployed.

it is not magic, and it is unlikely to ever be magic. but from my personal perspective and that of many others I read - if you spend time (I am now just over 1,200 hours in; I bill it so I track it :) ) it will pay dividends (and it will also occasionally feel like magic)


If you spent 1200 hours not using it you would have matured in your craft 3x more and figured out far better ways of doing things.


been hacking for 3 decades, so well north of 1,200 hours ... in my career the one trait that always seems to differentiate great SWEs from decent/mediocre/awful ones is laziness.

the best SWEs will automate anything they have to do manually more than once. I have seen this over and over and over again. LLMs have taken automation to another level, and learning everything they can help automate in my work will be worth 12,000+ hours in the long run.


What is this fantasy about people being unemployed? The layoffs we’ve seen don’t seem to be discriminating against or in favor of AI - they appear to be moves to shift capital from human workers to capex for new datacenters.

It doesn’t appear that anything of this sort is happening, and the idea that a good employer with a solid technical team would start firing people for not “knowing AI”, instead of giving them a 2-week intro course, seems unrealistic to me.

The real nuts and bolts are still software engineering. Or is that going to change too?


I don't think there will be massive unemployment based on actual "AI has removed the need for SWEs of this level..." kind of talk, but I was specifically commenting on "eventually all that will smooth out, and late adopters who decide they want to use the tools will be able to onboard themselves plenty fast." If this actually did happen (it won't), then we'd all have to worry about being unemployed.


They'll be more employable, not less. Since they're the only ones who will be able to fix the huge mess left behind by the people relying on them.


Never in the history of tech did luddites have an advantage in employment.


I mean, yeah they did, in this sense, literally all the time. The people who generated crap by copy-pasting from Stack Overflow, or generated scaffolding with tools they didn't understand, were literally the kind of programmers you tried to weed out.

This is equivalent of that.


Crappy engineers are going to be crappy engineers, so what?

> This is equivalent of that.

In the hands of a crappy engineer from above, you are correct.


It’s the opposite. The more you know how to do without them, the more employable you are. AI has no learning curve, not at the current level of complexity anyway. So anyone can pick it up in 5 years, and if you’ve used it less, your brain is better.


With all due respect, claiming “AI has no learning curve” can be an effective litmus test for who has actually dug into agentic AI enough to give it a real evaluation. Once you start to peel back the layers of how to get good output, you understand just how much skill is involved. It’s very similar to being a “good googler”. Yeah, on its face it seems like it shouldn’t be a thing, but there absolutely are levels to it, and it’s a skill that must be learned.


There is nothing to learn, the entry barrier is zero. Any SWE can just start using it when they really need to.


Some of us will need time to learn to give less of a shit about quality.


Or you could learn how to do it the right way with quality intact. But it’s definitely your choice.


Good. The smartest and best should be cutting out middlemen and selling something of their own instead of continuing to shovel all the money up the company pyramids. I think the pyramids' trash will become easier and easier to spot and avoid.


> ... SWEs who don’t learn how to use it consistently ...

An SWE does not necessarily need to "learn" Claude Code any more than someone who does not know programming at all does in order to use the tool effectively. What actually matters is that they know how things should be done without coding assistants, they understand what the tools may be doing, and they can then give directions, correct mistakes, and review code.

In fact, I'd argue tools should be simple and intuitive for any engineer to quickly pick up. If an engineer who has solid background in programming but with no prior experience with the tools cannot be productive with such a tool after an hour, it is the tool that failed us.

You don't see people talk about "prompt engineering" as much these days, because that simply isn't so important any more. Any good tool should understand your request like another human does.


People don't talk about prompt engineering because it has become “context engineering”. Agentic AI is the real deal future.


Don't think so.




