
It's really embarrassing to put a stake in these. It's like when Gary Marcus said that an image generator won't be able to draw a horse riding an astronaut and will make the astronaut ride the horse, because that is the more frequent pattern in the training set. Or when an urban legend (academic legend?) started that state-of-the-art classifiers misclassify cows when they are on sandy beaches and only recognize them on grass. (This did happen in some cases with small datasets and shortcut learning, but no state-of-the-art image classifier had such glaring, straightforward errors; it still trickled down to popular consciousness and grade-school teaching materials on AI limitations.)

Now it's about hallucinating nonexistent libraries. But reasoning models, RAG, large contexts, and web search make this much less of an issue. The limitations everyone points at, the ones that trickle down to a soundbite everyone repeats, usually don't turn out to be fundamental limitations at all.


It is not about fundamental limitations. It is about black boxes and magic. If you don't understand a system, you don't know what lurks within it or how it can bite you in the ass. Black boxes break - and when they break, you are helpless.

If you already know how to build software, LLMs are a godsend. I actually had a nice case recently where an LLM invented some quite plausible imaginary GraphQL mutations. I had enough experience in the field not to waste time debugging them; a historian who hadn't shipped software before wouldn't.
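
For the curious, here's a minimal sketch of what that failure mode looks like. Everything below is made up for illustration - the endpoint, the mutation name, and the fields are the kind of plausible-sounding things an LLM invents, not anything from my actual project:

    // Hypothetical sketch (TypeScript): calling an LLM-hallucinated
    // GraphQL mutation. "archiveUserPosts" is invented for this example.
    const HALLUCINATED_MUTATION = `
      mutation ArchivePosts($userId: ID!) {
        archiveUserPosts(userId: $userId) {  # field invented by the model
          archivedCount
        }
      }
    `;

    async function tryMutation(endpoint: string, userId: string): Promise<void> {
      const res = await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          query: HALLUCINATED_MUTATION,
          variables: { userId },
        }),
      });
      const payload = await res.json();
      // A spec-compliant server rejects the unknown field at validation
      // time with something like:
      //   Cannot query field "archiveUserPosts" on type "Mutation".
      if (payload.errors) {
        console.error("Schema rejected the field:", payload.errors);
      }
    }

The tell is that the error comes back from schema validation, not from your own code. Someone who knows GraphQL reads that and moves on; someone who has never shipped software may spend hours assuming the bug is on their side.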

There were WYSIWYG editors before this, and before them visual programming - we have been trying to abstract away that pesky complexity forever, so far without success. I don't see anything in LLMs/gen AI that will change that. They will make good people more productive and sloppy people sloppier; they won't make people who are bad at solving problems good at it.


Yes, the consequence will be that mediocre devs have a harder time finding jobs. The higher skilled will be fine (for some time at least), but the trend is clear: LLMs are going to make the top engineers more effective, and the less skilled will have less and less to contribute. The skill floor for useful contributions is rising, just like in other industries. Most of the world's software projects aren't at the forefront of human knowledge, requiring brilliant minds. It's integration, dependencies, routing around edge cases, churning out some embedded code for another device - mostly standard stuff, but with enough differences that previous automation paradigms couldn't handle the uniqueness. LLMs plus a few highly skilled people (who know how to prompt them effectively) will get it done in less time than a team does today.

It's not that LLMs turn low-skilled people into geniuses; it's that a large segment of even those with enough cognitive skill to work in software today will no longer have marketable skill levels. The experienced, good ones will still have marketable skills, but a lot won't.



