
I get that some people want to be intellectually "pure". Artisans crafting high-quality software, made with love, and all that stuff.

But one emerging reality for everyone should be that businesses are swallowing the AI hype raw. You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper. Non-coders are churning out small apps at a record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.

If your org is blindly data/metric driven, it is probably just a matter of time until managers start asking why everyone else is producing so much while you're slow.



> Non-coders are churning out small apps at a record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.

Honestly I think you’re swallowing some of the hype here.

I think the biggest advantages of LLMs go to the experienced coders who know how to leverage them in their workflows. That may not even include having the LLM write the code directly.

The non-coders-producing-apps meme is all over social media, but the real-world results aren't there. All over Twitter there were "build in public" indie non-tech developers using LLMs to write their apps, and the hype didn't match reality. Some people could get minimal apps out the door that kind of talked to a back end, but even those people were running into issues keeping everything from breaking on updates or managing the software lifecycle.

The top complaint about LLMs in all of my social circles is juniors submitting LLM junk PRs and then blaming the LLM. It's just not true that juniors are expertly solving tasks with LLMs faster than seniors.

I think LLMs are helpful, and anyone senior who isn't learning how to use them to their advantage (which doesn't mean telling the LLM what to write and hoping for the best) is missing out. I think people swallowing the hype about non-tech people and juniors doing senior work are getting misled about the actual ways to use these tools effectively.


It's not just "juniors". It's people who should know better turning out LLM junk outside their actual experience areas because "They are experienced enough to use LLMs".

There are just some things that need lots of extra scrutiny in a system, and the experienced ones know where that is. An LLM rarely seems to, especially for systems of anywhere near real world production size.


I feel sorry for juniors because they have even less incentive to troubleshoot or learn languages. At the same time, the sheer size of APIs makes me relieved that I will never have to remember another command, DSL, or argument list again. Ruby has hundreds of methods, Rails hundreds more, and they constantly change. I'd rather write a prompt saying what I mean than figure out obscure incantations, especially with infrequently used tools like ffmpeg.

Advent of Code (https://adventofcode.com/2025/about) says:

> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.

I would advocate for Advent of Code in every workplace, but finding interest is rare. No matter how much craft is emphasized, ultimately businesses are concerned with solving problems. Even personally, sometimes I want to solve a problem so I can move on to something more interesting.


I’m a garage coder and the kind of engineer that has a license. With my kids, I had the capacity to make a usable application for my work about once every six months. Now it's once a weekend or so. You don't have to believe it.


I didn't read the parent comment as celebrating this state. More like they were decrying it, and the blindness of people who just run on metrics.


This just happened to me this week.

I work on the platform everyone builds on top of. A change here can subtly break any feature, no matter how distant.

AI just can't cope with this yet. So my team has been told that we are too slow.

Meanwhile, earlier this week we halted a rollout because of a bug introduced by AI: it worked around a privacy feature by just allow-listing the behavior it wanted, instead of changing the code to address the policy. It wasn't caught in review because the file that was changed didn't require my team's review (because we ship more slowly, they removed us as code owners for many files recently).


> It wasn't caught in review because the file that was changed didn't require my team's review (because we ship more slowly, they removed us as code owners for many files recently).

I've lost your fight before, but I've also won it: you can sell this as risk reduction to your boss. I've never seen eng win this argument on quality grounds; quality is rarely something that can be understood by company leadership. But having a risk-reduction team that moves a bit slower and protects the company from extreme exposures like this is much harder to cut from the process. "Imagine the lawsuits missing something like this would cause," and "we don't move slower, we do more than the other teams; the code is more visible, but the elimination of mistakes that would be very expensive legally and reputationally is what we're the best at."


As was foretold from the beginning, AI use is breaking security wantonly.


Fuck it - let them reap the consequences. Ideally wait until there's something particularly destructive, then do the post-mortem as publicly as possible - call out the structures and practices that enabled that commit to get into production.


Ouch, so painful to read.


I think LLMs are net helpful if used well, but there's also a big problem with them in workplaces that needs to be called out.

It's really easy to use LLMs to shift work onto other people. If all your coworkers use LLMs and you don't you're gonna get eaten alive. LLMs are unreasonably effective at generating large volumes of stuff that resembles diligent work on the surface.

The other thing is, tools change trade-offs. If you're in a team that's decided to lean into static analysis, and you don't use type checking in your editor, you're getting all the costs and less of the benefits. Or if you're in a team that's decided to go dynamic, writing good types for just your module is mostly a waste of time.

LLMs are like this too. If you're using a very different workflow from everyone else on your team, you're going to end up constantly arguing for different trade-offs, and ultimately you're going to cause a bunch of pointless friction. If you don't want to work the same way as the rest of the team just join a different team, it's really better for everyone.


> It's really easy to use LLMs to shift work onto other people.

This is so incredibly true.


I'm interested in this. Code review, most egregiously where the "author" neglected to review the LLM output themselves, seems like a clear instance. What are some other examples?

Something that should go in a "survival guide" for devs that still prefer to code themselves.


Well, if you take "review the LLM output" in its most general way, I guess you can class everything under that. But I think it's worth talking about the problem in a bit more detail than that, because someone can easily say "Oh I definitely review the LLM output!" and still be pushing work onto other people.

The fact is that no matter whether we review the LLM output or not, no matter whether we write the code entirely by hand or not, there's always going to be the possibility of errors. So it's not some bright-line thing. If you're relatively lazier and relatively less thoughtful in the way you work, you'll make more errors and more significant errors. You'll look like you're doing the work, but your teammates have to do more to make up for the problems.

Having to work around problems your coworkers introduced is nothing new, but LLMs make it worse in a few ways I think. One is just that old joke about there being four kinds of people: lazy and stupid, industrious and stupid, smart and lazy, and industrious and smart. It's always been the "industrious and stupid" people that kill you, so LLMs are an obvious problem there.

Second there's what I call the six-fingered hands thing. LLMs make mistakes a human wouldn't, which means the problem won't be in your hypothesis-space when you're debugging.

Third, it's very useful to have unfinished work look unfinished. It lets you know what to expect. If there's voluminous docs and tests and the functionality either doesn't work at all or doesn't even make sense when you think about it, that's going to make you waste time.

Finally, at the most basic level, we expect there to be some sort of plan behind our coworkers' work. We expect that someone's thought about this and that the stuff they're doing is fundamentally going to be responsive to the requirements. If someone's phoning it in with an LLM, problems can stay hidden for a long time.


I'm currently really feeling the pain with the sidebar stuff. The non-"application" code/config.

Scripts, CI/CD, documentation, etc. The stuff that gets a PR but doesn't REALLY get the same level of review because it's not really production code. But when you need to go tweak the thing it does a few months or years later... it's so dense and undecipherable that you spend more time figuring out how the LLM wrote the damn thing than doing it all over yourself.

Should you probably review it a little harsher in the moment? Sure, but that's not always feasible with things that are at the time "not important" and only later become the root of other things.

I have lost several hours this week to several such occurrences.


AI-generated docs, charts, READMEs, TOE diagrams. My company’s Confluence is flooded with half-assed documentation from several different dev teams that either loosely matches the behavior or configuration of their apps or doesn’t match it at all.

For example they ask to have networking configs put into place and point us at these docs that are not accurate and then they expect that we’ll troubleshoot and figure out what exactly they need. It’s a complete waste of time and insulting to shove off that work onto another team because they couldn’t be fucked to read their own code and write down their requirements accurately.


If I were a CTO or VP these days I think I'd push for a blanket ban on committing docs/readmes/diagrams etc along with the initial work. Teams can push stuff to a `slop/` folder but don't call it docs.

If you push all that stuff at the same time, it's really easy to get away with this soft lie, "job done". They can claim they thought it was okay and it was just an honest mistake there were problems. They can lie about how much work they really did.

READMEs or diagrams that are plans for the functionality are fine. Docs that describe finished functionality are fine. Slop that dresses up unfinished work as finished work just fucks everything up, and the incentives are misaligned so everyone's doing this.


Bugs. In our project, developers are now producing 4x as many bugs as in 2024. Same developers, but now with Cursor.

Basically they are pushing their work to the test engineers or whoever is doing testing (might be end users).


> It's really easy to use LLMs to shift work onto other people.

This is my biggest gripe with LLM use in practice.


The era of software mass production has begun. With many "devs" just being workers on a production line, pushing buttons, repeating the same task over and over.

The produced products, however, do not compare in quality to those of other industries' mass production lines. I wonder how long it takes until this all comes crashing down. Software mostly already is not a high-quality product; with Claude & co it just gets worse.

edit: sentence fixed.


I think you'll be waiting a while for the "crashing down". I was a kid when manufacturing went offshore and mass production went into overdrive. I remember my parents complaining about how low quality a lot of mass produced things were. Yet for decades most of what we buy is mass produced, comparatively low quality goods. We got used to it; the benefits outweighed the negatives. What we thought mattered didn't in the face of a lot of previously unaffordable goods now broadly available and affordable.

You can still buy high-quality goods made with care when it matters to you, but that's the exception. It will be the same with software. A lot of what we use will be mass produced with AI, and even produced in realtime on the fly (in 5 years maybe?). There will be some things where we'll pay a premium for software crafted with care, but for most it won't matter because of the benefits of rapidly produced software.

We've got a glimpse of this with things like Claude Artifacts. I now have a piece of software quite unique to my needs that simply wouldn't have existed otherwise. I don't care that it's one big js file. It works and it's what I need and I got it pretty much for free. The capability of things like Artifacts will continue to grow and we'll care less and less that it wasn't human produced with care.


While a general "crashing down" probably will not happen, I could imagine some differences from other mass-produced goods.

Most of our private data lives in clouds now and there are already regular security nightmares of stolen passwords, photos etc. I fear that these incidents will accumulate with more and more AI-generated code that is most likely either not reviewed at all or only reviewed by another AI.

Also regardless of AI I am more and more skipping cheap products in general and instead buying higher quality things. This way I buy less but what I buy doesn't (hopefully) break after a few years (or months) of use.

I see the same for software. Already before AI we were flooded with trash. I bet we could all delete at least half of the apps on our phones and nothing would be worse than before.

I am not convinced by the rosy future of instant AI-generated software, but the future will reveal what is to come.


I think one major lesson of the history of the internet is that very few people actually care about privacy in a holistic, structural way. People do not want their nudes, browsing history and STD results to be seen by their boss, but that desire for privacy does not translate to guarding their information from Google, their boss, or the government. And frankly this is actually quite rational overall, because Google is in fact very unlikely to leak this information to your boss, and if they did, it would more likely result in a legal payday than any direct social cost.

Hacker News obviously suffers from severe selection bias in this regard, but for the general public I doubt even repeated security breaches of vibe-coded apps will move the needle much on the perception of LLM-coded apps, which means that they will still sell, which means that it doesn't matter. I doubt most people will even pick up the connection. And frankly, most security breaches have no major consequences anyway, in the grand scheme of things. Perhaps the public consciousness will harden a bit when it comes to uploading nudes to "CheckYourBodyFat", but the truly disastrous stuff like bank access is mostly behind 2FA layers already.


Poor quality is not synonymous with mass production. It's just cheap crap made with little care.


> The era of software mass production has begun.

We've been in that era for at least two decades now. We just only now invented the steam engine.

> I wonder how long it takes until this comes all crashing down.

At least one such artifact of craft and beauty already literally crashed two airplanes. Bad engineering is possible with and without LLMs.


There's a huge difference between possible and likely.

Maybe I'm pessimistic, but I at least feel like there's a world of difference between a practice that encourages bugs and one that only lets them through when there is negligence. The accountability problem needs to be addressed before we say it's like self-driving cars outperforming humans. On an errors-per-line basis, I don't think LLMs are on par with humans yet.


Knowing your system components’ various error rates and compensating for them has always been the job. This includes both the software itself and the engineers working on it.

The only difference is that there is now a new high-throughput, high-error (at least for now) component editing the software.


What is (and I'm being generous with the base here) 0.95^10? That's a 10-step process with a 95% success rate on each step.
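A quick back-of-the-envelope sketch of how that compounds (plain Python; assumes the ten steps succeed or fail independently):

    # chance that all 10 steps of a 95%-reliable process succeed
    p_all = 0.95 ** 10
    print(round(p_all, 2))  # ~0.6, i.e. roughly a 40% chance of at least one failure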


Yeah, it’ll be interesting to see whether blaming LLMs becomes as acceptable as “caused by a technical fault” for deflecting responsibility from what is ultimately a programmer’s output.

Perhaps that’s what led to a decline in accountability and quality.


The decline in accountability has been in progress for decades, so LLMs can obviously not have caused it.

They might of course accelerate it if used unwisely, but the solution to that is arguably to use them wisely, not to completely shun them because "think of the craft and the jobs".

And yes, in some contexts, using them wisely might well mean not using them at all. I'd just be surprised if that were a reasonable default position in many domains in 5-10 years.


> Bad engineering is possible with and without LLMs

That's obvious. It's a matter of which makes it more likely


> Bad engineering is possible with and without LLMs.

Is Good Engineering possible with LLMs? I remain skeptical.


Why didn't programmers think of stepping down from their ivory towers and starting to make small apps that solve small problems? The kind that people and businesses are very happy to pay for?

But no! Programmers seem to only like working on giant-scale projects, which are of interest only to huge enterprises, governments, or the open source quagmire of virtualization within virtualization within virtualization.

There's exactly one good invoicing app I've found that works well for freelancers and small businesses, while the number of potential customers is in the tens of millions. Why aren't there at least 10 good competitors?

My impression is that programmers consider it below their dignity to work on simple software that solves real problems and is great for its niche. Instead it has to be big and complicated, enterprise-scale. And if they can't get a job doing that, they will pretend to have a job doing that by spending their time making open source software for enterprise-scale problems.

Instead of earning a very good living by making boutique software for paying users.


I don't think programmers are the issue here. What you describe sounds to me more like typical product management in a company: stuff features into the thing until it bursts with bugs and is barely maintainable.

I would love to do something like what you describe. Build a simple but solid and very specialized solution. However I am not sure there is demand or if I have the right ideas for what to do.

You mention invoicing and I think: there must be hundreds of apps for what you describe but maybe I am wrong. What is the one good app you mention? I am curious now :)


There's a whole bunch of apps for invoicing, but if you try them, you'll see that they are excessively complicated. Probably because they want to cover all bases of all use cases. Meaning they aren't great for any use case. Like you say.

The invoicing app in particular I was referring to is Cakedesk. Made by a solo developer who sells it for a fair price. Easy to use and has all the necessary functions. Probably the name and the icon are holding him back, though. As far as I understand, the app is mostly a database and an Electron/Chromium front-end, all local on your computer. Probably very simple and uninteresting for a programmer, but extremely interesting for customers who have a problem to solve.


One person's "excessively complicated" is another person's "lackluster and useless" because it doesn't have enough features.


Yes, enterprise needs more complicated setups. But why are programmers only interested in enterprise scale stuff?


I'm curious: why don't YOU create this app? 95% of a software business isn't the programming, it's the requirements gathering and marketing and all that other stuff.

Is it beneath YOUR dignity to create this? What an untapped market! You could be king!

Also it's absurd to an incredible degree to believe that any significant portion of programmers, left to their own devices, are eager to make "big, complicated, enterprise-scale" software.


What makes you think that I know how to program? It's not beyond my dignity, it's beyond my skills. The only thing I can do is support boutique programmers with my money as a consumer, and I'm very happy to do that.

But yes, sometimes I have to AI code small things, because there's no other solution.


Solving these problems requires going outside and talking to people to find out what their problems are. Most programmers aren't willing to do that.


Many people actually are becoming more productive. I know you're using quotes around productive to insulate yourself from the indignity of admitting that AI actually is useful in specific domains.


> Many people actually are becoming more productive. I know you're using quotes around productive to insulate yourself from the indignity of admitting that AI actually is useful in specific domains.

Equally, my read is that you're fixating on the syntax used in their comment to insulate yourself from actually engaging with their idea and point. You refuse to try to understand the parts of the system that negate the surface-level popularity, er, productivity gains.

People who enjoy the productivity boost of AI are right, you can absolutely, without question build a house faster with AI.

The people who claim there aren't really any reasonable productivity gains from AI are also right, because using AI to build a multistory anything requires you to waste all that time starting with a house, only to then raze it to the ground and rebuild a usable foundation.

yes, "but its useful in specific domains" is technically correct statement, but whataboutism is rarely a useful conversational response.


If AI is making you more productive, then I doubt you were very productive pre-AI


I had a software engineering job before AI. I still do, but I can write much more code. I avoid AI in more mission-critical domains and areas where it is more important that I understand the details intimately, but a lot of coding is repetitive busywork, looking for "needles in haystacks", porting libraries, etc. which AI makes 10x easier.


The denial/cope here is insane


My experience with using AI is that it's a glorified stack overflow copy paster. It'll even glue a handful of SO answers together!

But then you run into classic SO problems... Like the first solution doesn't work. Nor the second one. And the third one introduces a completely different coding style. The last one is implemented in pure sh/GNU utils.

One thing it is absolutely amazing at: digesting things that have bad documentation, like the OpenSSL C API. Even then you still gotta be on the watch for hallucinations, and audit it very thoroughly.


> You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper.

I am actually less productive when using LLMs because now I have to read another entity's code and be able to judge whether this fits my current business problem or not. If it doesn't, yay, refactoring prompts instead of tackling the actual problem. Also, I can write code for free; LLM coding assistants aren't free. I can fit business problems and edge cases into my brain given some time; an LLM is unaware of edge cases, legal requirements, decoupled dependencies, potential refactors, or the occasional call from the boss asking for something to be sneaked into the code right now. If my job forced me to use these tools, congrats, I'll update my address to some hut in a forest eating cold canned ravioli for the rest of my life because I for sure don't wanna work in a world where I am forced to use dystopian big tech machines I can't look into.


> I am actually less productive when using LLMs because now I have to read another entity's code and be able to judge whether this fits my current business problem or not.

You don’t have to let the LLM write code for you. They’re very useful as a smart search engine for your code base, a smart refactoring tool, a suggestion generator, and many other ways.

I rarely have LLMs write code for me from scratch that I have to review, but I do give them specific instructions to do what I want to the codebase. They can do it much faster than I can search around the codebase and type out myself.

There are so many ways to make LLMs useful without having them do all the work while you sit back and judge. I think some people are determined to get no value out of the LLM because they feel compelled to be anti-hype, so they’re missing out on all the different little ways they can be used to help. Even just using it as a smarter search engine (in the modes where they can search and find the right sections of right articles or even GitHub issues for you) has been very helpful. But you have to actually learn how to use them.

> If my job forced me to use these tools, congrats, I'll update my address to some hut in a forest eating cold canned ravioli for the rest of my life because I for sure don't wanna work in a world where I am forced to use dystopian big tech machines I can't look into.

Okay, good luck with your hut in the forest. The rest of us will move on using these tools how we see fit, which for many of us doesn’t actually include this idea where the LLM is the author of the code and you just ask nicely and reject edits until it produces the exact code you want. The tools are useful in many ways and you don’t have to stop writing your own code. In fact, anyone who believes they can have the LLM do all the coding is in for a bad surprise when they realize that specific hype is a lie.


Is that why open source progress has generally slowed down since 2023? We keep hearing these promises, and reality shows the opposite.


> Is that why open source progress has generally slowed down since 2023?

Citation needed for a claim of that magnitude.


> But you have to actually learn how to use them.

This probably is the issue for me, I am simply not willing to do so. To me the whole AI thing is extremely dystopian so even on a professional level I feel repulsed by it.

We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.

I want to write software that works, preferably even offline. I want tools that do not spy on me (referring to that new Google editor, forgot the name). Call me once these tools work offline on my 8GB RAM laptop with a crusty CPU and I might put in the effort to learn them.


> We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_.

I share that concern about massive, unforced centralization. If there were any evidence for the hypothesis that LLM inference would always remain viable in datacenters only, I'd be extremely concerned about their use too.

But from all I've seen, it seems overwhelmingly likely that we'll have very powerful ones in our phones in at most a few years, and definitely in midrange laptops and above.


I'd be all for it if it's truly disconnected from big tech entities.


> This probably is the issue for me, I am simply not willing to do so.

Thanks for being honest at least. So many HN arguments start as a desire to hate something and then try to bridge that into something that feels like a takedown of the merits of that thing. I think a lot of the HN LLM hate comes from people who simply want to hate LLMs.

> We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.

For an experienced dev using LLMs as another tool, an LLM outage isn’t a problem. You just continue coding.

It’s on the level of Google going down so you have to use another search engine or try to remember the URL for something yourself.

The main LLM players are also easy to switch between. I jump between Anthropic, Google, and OpenAI almost month to month to try things out. I could have subscriptions to all 3 at the same time and it would still be cheap.

I think this point is overblown. It's not a true team dependency like when GitHub stopped working a few days back.


> If your org is blindly data/metric driven, it is probably just a matter of time until managers start asking why everyone else is producing so much while you're slow.

It’s a reasonable question, and my response is that I’ve encountered multiple specific examples now of a project being delayed a week because some junior tried to “save” a day by having AI write bad code.

Good managers generally understand the concept of a misleading productivity metric that fails to reflect real value. There’s a reason, after all, why most of us don’t get promoted based on lines of code delivered. I understand why people who don’t trust their managers to get this would round it off to artisanship for its own sake.


In my experience I've seen the complete opposite of "juniors looking like savants". There are a few pieces of code made by some junior and mid-level engineers in my company (one also involving a senior) that were clearly made with AI, and they are such a mess that they haven't been touched since, because they're just impossible to understand. This wasn't caught in the PR because it was so large that people didn't actually bother reading it.

I did see a few good senior engineers using AI and producing good code, but for junior and mid engineers I have witnessed the complete opposite.


> If your org is blindly data/metric driven

Are there for-profit companies (not non-profits, research institutes, etc.) that are not metric driven?


Most early stage startups I've been in weren't metric driven. When everyone is just working as hard as they can to get the thing built, it's impossible to suddenly slow down and start measuring everyone's output.

It's not until later, when it's gotten to a larger size, that you have the resources to be metric driven.


Every early stage startup is absolutely metric driven: keeping the business alive based on Runway


“Blindly” is the operative word here.


That’s almost an oxymoron

You can’t be data driven and also blind to the data

You might be optimizing for the wrong thing, but it’s not blind, it’s just a bad “model”


The blindness is to reality and nuance.

If you stare at your GPS and don’t pay attention to what’s in the real world outside your windshield until you careen off a cliff that would be “blindly” following your GPS. You had data but you didn’t sufficiently hedge against your data being incomplete.

Likewise sticking dogmatically to your metrics while ignoring nuance or the human factor is blindly following your metrics.


> You can’t be data driven and also blind to the data

"Tickets closed" is an amazing data driven & blind to the data metric. You can have someone closing an insane number of tickets, looking amazing on the KPIs, but no one's measuring "Tickets reopened" or "Tickets created for the same issue a day later".

It's really easy to set up awful KPIs and lose all sight of what is actually happening while having data to show your bosses
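As a toy sketch of how that plays out (hypothetical event log and numbers, plain Python):

    # hypothetical ticket event log: (ticket_id, action)
    events = [("T1", "closed"), ("T1", "reopened"), ("T1", "closed"),
              ("T2", "closed"), ("T3", "closed"), ("T3", "reopened")]

    closed = sum(1 for _, action in events if action == "closed")      # 4: what the KPI dashboard shows
    reopened = sum(1 for _, action in events if action == "reopened")  # 2: what nobody measures
    print(closed, closed - reopened)  # 4 closes reported, but only 2 tickets actually stayed resolved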


That’s a good example for sure - I’d still argue it’s a problem of using the wrong economic model

Success = tickets closed is wrong, but data driven



