Basically everyone I know in engineering shares this resentment in some way, and the AI industry has itself to blame.

People are fed up and burned out from being forced to try useless AI tools by non-technical leaders who understand neither how LLMs work nor the ways they suck, and now resent anything related to AI. But AI companies have a perverse incentive to keep pushing AI on people until it finally works, because the winner of the AI arms race won't be the company that waits until it has a perfect, polished product.

I've had my own "fun" trying to discuss LLMs with non-technical people, and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often low quality, barely maintainable, and rarely useful outside quick experimentation. They refuse to believe it, even though they do hit a wall with their vibe-coded project after a few months, once Claude stops generating miracles - they lack the experience with code to recognize that they're hitting maintainability issues. Combine that with the fact that every "wow!" LLM example is really just the LLM regurgitating something tutorials are commonly written about, and people tend to overestimate its abilities.

I use Claude multiple times a week because, even though LLM-generated code is trash, I am open to trying new tools. But my general experience is that Claude can't do anything well that I couldn't have my non-technical partner do. It has given me a sort of superiority complex where I immediately disregard the opinion of any developer who thinks it's a wondertool, because clearly they didn't have high standards for the work they were already doing.

I think most developers with any skill to their name agree. Looking at how Microsoft developers are handling the forced AI, they do seem desperate: https://news.ycombinator.com/item?id=44050152 - even though they respond with the most "cope" answers I've ever read when confronted about how poorly it is going.



> and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often low quality, barely maintainable, and rarely useful outside quick experimentation.

There are quite a few things they can do reasonably well - but they're mostly useful to experienced programmers/architects as a time saver. Working with an LLM that way often reminds me of when I had a lot of young, inexperienced Indian developers to work with - the LLM comes up with the same nonsense, lies and excuses, but unlike the inexperienced humans I can insult it guilt-free, which also sometimes gets it back on track.

> They refuse to believe it, even though they do hit a wall with their vibe-coded project after a few months, once Claude stops generating miracles - they lack the experience with code to recognize that they're hitting maintainability issues.

For having an LLM operate on a complete code base there currently seems to be a hard limit of something like 10k-15k LOC, even with the models with the largest context windows. After that, if you want to keep using an LLM, you have to make it work on a specific subsection of the project and manually provide the required context.

Now, getting to 10k LOC _can_ be sped up significantly by using an LLM. Ideally you refactor the stupid parts along the way - which is made a bit easier by building in sensible steps (which again requires experience). From my experiments, once you've finished that initial step, you'll then spend roughly 4-5 times the time you just spent with the LLM on making the code base actually maintainable. For my test projects I spent roughly one day building it up and the rest of the week getting it maintainable. Fully manual would've taken me 2-3 weeks, so it saved time - but only because I have experience with what I'm doing.


I think there's a lot of truth to what you're saying. The 4-5x amount of time to make the codebase readable resonates.

If I really wanted to go 100% LLM as a challenge, I think I'd compartmentalize a lot, and maybe rely on OpenAPI and other API description languages to reduce the complexity of what the LLM has to deal with in its current "compartment" (i.e. the frontend or the backend). Claude.md also helps a lot.
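As a rough sketch of "give it the contract, not the other compartment's code": a few lines that condense an exported OpenAPI spec into a compact endpoint summary to hand the LLM as context. The openapi.json filename and layout here are assumptions of mine, not anything the tooling requires:

    import json

    # Condense an exported OpenAPI spec into a short "contract" the LLM
    # can keep in context instead of the whole other compartment.
    # Assumes the backend exports its spec as openapi.json (illustrative).
    with open("openapi.json") as f:
        spec = json.load(f)

    for path, methods in spec.get("paths", {}).items():
        for verb, op in methods.items():
            if verb not in {"get", "post", "put", "patch", "delete"}:
                continue  # skip path-level keys like "parameters"
            params = ", ".join(p["name"] for p in op.get("parameters", []))
            print(f"{verb.upper():6} {path}({params})  # {op.get('summary', '')}")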

I do believe in some time saving, but at the same time, almost every line of code I write usually requires some deliberate thought, and if the LLM does that thinking, I often have to correct it. If I use English to explain exactly what I want, it's sometimes OK, but then that's basically the same effort. At least that's my empirical experience.


> almost every line of code I write usually requires some deliberate thought

That's probably the worst case for trying to use an LLM for coding.

A lot of the code it'll produce will be incorrect on the first try - so to avoid sitting through iterations of absolute garbage you want the LLM to be able to compile the code. I typically provide a makefile which compiles the code, and then runs a linter with a strict ruleset and warnings set to error, and allow it to run make without prompting - so the first version I get to see compiles, and doesn't cause lint to have a stroke.
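Roughly the same idea, sketched as a single gate script for a Python project - just an illustration of the setup, not the setup itself; it assumes ruff and pytest are installed, and the src/ layout is made up:

    #!/usr/bin/env python3
    # check.py - the one command the LLM is allowed to run unprompted.
    # Sketch only: assumes a Python project with ruff and pytest
    # installed; the src/ layout and step list are illustrative.
    import subprocess
    import sys

    STEPS = [
        # Byte-compile everything - the "does it compile" step.
        [sys.executable, "-m", "compileall", "-q", "src"],
        # Strict lint; any finding is a hard failure.
        [sys.executable, "-m", "ruff", "check", "src"],
        # Tests are part of the gate too (see below).
        [sys.executable, "-m", "pytest", "-q"],
    ]

    for step in STEPS:
        print("$", " ".join(step))
        if subprocess.run(step).returncode != 0:
            sys.exit(1)  # fail fast: I only look at output that passes
    print("all checks passed")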

Then I typically make it write tests, and include the tests in the build process - for "hey, add tests to this codebase" the LLM is performing no worse than your average cheap code monkey.
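The level it reliably produces looks something like this - a self-contained toy example, with slugify inlined so it runs standalone; in practice the function would already exist in the codebase under test:

    import re

    def slugify(s: str) -> str:
        # Inlined so the example runs standalone; normally this would
        # already live in the codebase the LLM is adding tests to.
        return re.sub(r"[^a-z0-9]+", "-", s.lower()).strip("-")

    def test_basic():
        assert slugify("Hello, World!") == "hello-world"

    def test_empty():
        assert slugify("") == ""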

Both with the linter and with the tests you'll still need to check what it's doing, though - just like the cheap code monkey it may disable lint on specific lines of code with comments like "the linter is wrong", or may create stub tests - or even disable tests, and then claim the tests were always failing, and it wasn't due to the new code it wrote.
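A cheap way to catch the most blatant of those tricks is to scan the diff for the usual tells - a sketch, with patterns that are illustrative rather than exhaustive:

    import pathlib
    import re

    # Flag the usual "code monkey" tells in LLM-written Python.
    # Patterns and the src/ path are illustrative, not exhaustive.
    SUSPECT = [
        re.compile(r"#\s*noqa"),            # lint silenced on one line
        re.compile(r"#\s*type:\s*ignore"),  # type checker silenced
        re.compile(r"pytest\.mark\.skip"),  # test quietly disabled
        re.compile(r"assert\s+True\b"),     # stub test that can't fail
    ]

    for path in pathlib.Path("src").rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            for pat in SUSPECT:
                if pat.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")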



