Hacker News | cruffle_duffle's comments

The number of non-technical people in my orbit that could successfully pull up Claude Code and one-shot a basic todo app is zero. They couldn’t do it before and won’t be able to now.

They wouldn’t even know where to begin!


You don't need to draw the line between tech experts and the tech-naive. Plenty of people have the capability but not the time or discipline to execute such a thing by hand.

You go to ChatGPT and say "produce a detailed prompt that will create a functioning todo app", then put that output into Claude Code, and you now have a todo app.

This is still a stumbling block for a lot of people. Plenty of people could've found an answer to a problem they had if they had just googled it, but they never did. Or they did, but they googled something weird and gave up. AI use is absolutely going to be similar to that.

Maybe I’m biased working in insurance software, but I don’t get the feeling much programming happens where the code can be completely stochastically generated and never reviewed, and that will be okay with users/customers/governments/etc.

Even if all sandboxing is done right, programs will be depended on to store data correctly and to show correct outputs.


Insurance is complicated, not frequently discussed online, and all code depends on a ton of domain knowledge and proprietary information.

I'm in a similar domain; the AI is like a very energetic intern. Getting a good result requires a prompt clear and detailed enough that I could probably write an expression to turn it into code. Even then, after a little back and forth it loses the plot and starts producing gibberish.

But in simpler domains or ones with lots of examples online (for instance, I had an image recognition problem that looked a lot like a typical machine learning contest) it really can rattle stuff off in seconds that would take weeks/months for a mid level engineer to do and often be higher quality.

Right in the chat, from a vague prompt.


Step one: you have to know to ask that. Nobody in that orbit knows how to do that. And these aren’t dumb people. They just aren’t devs.

3D printing is something I think about. LLMs do their best work on text, and 3D printers consume gcode. I’ve had Sonnet spit out perfectly good single-layer test prints. Obviously it won’t have the context window to hold much more gcode BUT…

If there was a text-based file format for models, it could generate those and you could hand them to the slicer. Like, I’ve never looked, but are STL files text or binary? Or those 3MF files?
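
To partially answer that: STL actually comes in both flavors. There is an ASCII text form (it begins with `solid` and spells out every `facet` in plain text) and a far more common binary form (an 80-byte header, a little-endian triangle count, then 50 bytes per triangle), and 3MF is a zip archive of XML. A rough sketch of sniffing which one you have (the `solid` prefix is only a convention, not a guarantee, so the size check runs first):

```python
import struct

def stl_flavor(data: bytes) -> str:
    """Guess whether raw STL bytes are ASCII or binary.

    Binary STL: 80-byte header + uint32 triangle count + 50 bytes
    per triangle, so the total size is checkable. ASCII STL starts
    with "solid" and contains "facet" keywords.
    """
    if len(data) >= 84:
        (ntri,) = struct.unpack_from("<I", data, 80)
        if len(data) == 84 + 50 * ntri:
            return "binary"
    if data.lstrip().startswith(b"solid") and b"facet" in data:
        return "ascii"
    return "unknown"

# A one-triangle ASCII STL, entirely human- (and LLM-) readable:
ascii_stl = b"""solid tri
  facet normal 0 0 1
    outer loop
      vertex 0 0 0
      vertex 1 0 0
      vertex 0 1 0
    endloop
  endfacet
endsolid tri
"""

print(stl_flavor(ascii_stl))  # ascii
```

The ASCII flavor is exactly the kind of thing an LLM can emit directly, though file sizes explode quickly for real models.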

If Gemini can generate a good looking pelican on a bicycle SVG, it can probably help design some fairly useful functional parts given a good design language it was trained on.

And honestly if the slicer itself could be driven via CLI, you could in theory do the entire workflow right to the printer.
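
Worth noting the end product of that pipeline is itself just text. A toy sketch of emitting gcode for a single-layer square perimeter (generic Marlin-style commands; heating is omitted, and the feed rates and extrusion factor here are made-up placeholders, not tuned values):

```python
def square_layer_gcode(size_mm: float = 20.0, z_mm: float = 0.2,
                       e_per_mm: float = 0.05) -> str:
    """Emit gcode for one square perimeter at a single layer height.

    Uses generic Marlin-style commands: G28 to home, G90 for
    absolute positioning, G1 moves with an E (extruder) axis.
    """
    x0 = y0 = 50.0  # arbitrary start position on the bed
    corners = [(x0 + size_mm, y0), (x0 + size_mm, y0 + size_mm),
               (x0, y0 + size_mm), (x0, y0)]
    lines = [
        "G28            ; home all axes",
        "G90            ; absolute positioning",
        f"G1 Z{z_mm} F300",
        f"G1 X{x0} Y{y0} F3000",
    ]
    e = 0.0
    prev = (x0, y0)
    for x, y in corners:
        dist = abs(x - prev[0]) + abs(y - prev[1])  # axis-aligned moves
        e += dist * e_per_mm  # accumulate filament extruded
        lines.append(f"G1 X{x} Y{y} E{e:.3f} F1200")
        prev = (x, y)
    return "\n".join(lines)

print(square_layer_gcode())
```

Eight plain-text lines for a whole perimeter, which is why a model can plausibly one-shot a test print but not a full multi-hour job.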

It makes me wonder if we are going to really see a push to text-based file formats. Markdown is the lingua franca of output for LLMs. Same with json, csv, etc. Things that are easy to “git diff” are also easy for LLMs…


There is a text-based file format for models. It's called OpenSCAD. It's also much more information-dense than a mesh format like STL: in OpenSCAD you describe the curve, while a mesh file like STL explicitly states every element of it.

It's just gimped to the point that you can basically only use it for hobbyist projects; anything reasonably professional-looking uses STEP-compatible files, and those are much more complex to try to emulate and get right. STEP is a bit different: it's more like a mesh in that it contains the final geometry, but in BRep, which is pretty close to machining grade. OpenSCAD is more like what you're asking about, a textual recipe for curves that you pass into an engine that turns it into the actual geometry. It's just that OpenSCAD is so wholly insufficient for what professional designs need that it never gets used in the professional world.
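
The density point is easy to make concrete. In OpenSCAD a smooth cylinder is a one-line recipe, e.g. `cylinder(h=10, r=5, $fn=256);`, regardless of resolution, while a mesh file must enumerate every triangle of the tessellation. A rough sketch of how the mesh side grows (50 bytes per triangle in binary STL; the triangle counts assume a simple wall-plus-fan-caps tessellation):

```python
def binary_stl_size(n_segments: int) -> int:
    """Approximate binary STL size for a closed cylinder tessellated
    into n_segments around the circumference.

    Side wall: 2 triangles per segment; each end cap: 1 triangle
    per segment (triangle fan). Binary STL overhead: 80-byte
    header + 4-byte triangle count, then 50 bytes per triangle.
    """
    triangles = 2 * n_segments + 2 * n_segments  # wall + two caps
    return 80 + 4 + 50 * triangles

# The OpenSCAD "recipe" stays ~30 bytes at any resolution; the mesh doesn't.
for fn in (16, 64, 256):
    print(fn, binary_stl_size(fn))
```

So the recipe form is a far better fit for an LLM's context window than the geometry it expands into.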


New metric: agent-hours spent on a task. Or do we measure in tokens? Clearly more tokens burned == more experience, right?

There are actually books which recommend that organizations track employee tokens burned as a proxy for AI adoption. Surprised me a bit.

it's the only KPI available.

Honestly responses like this should just be straight blocked by the moderators. They are so super lame and go directly against the rules.

The more I dive into this space, the more I think that developers will still be in heavy demand—just operating at a different level of abstraction most of the time. We will need to know our CS fundamentals, experience will still matter, juniors will still be needed. It’s just that a lot of the time the actual code being generated will come from our little helper buddies. But those things still need a human in the seat to drive them.

I keep asking myself “could my friends and family be handed this and be expected to build what I’m building on them” and the answer is an immediate “absolutely not”. Could a non-technical manager use these tools to build what I’m building? Absolutely not. And when I think about it, it’s for the exact same reason it’s always been… they just aren’t developers. They just don’t “think” in the way required to effectively control a computer.

LLMs are just another way to talk to a machine. They aren’t magic. All the same fundamental principles that apply to properly telling a machine what to do still apply. It’s just a wildly different mechanism.

That all being said, I think these things will dramatically speed up the pace at which software eats the world. Put LLMs into a good harness and holy shit, it’s like a superpower… but to unlock those superpowers you still have to know the basics, same as before. I think this applies to all other trades too. If you are a designer you still have to know what good design is and how to articulate it. Data scientists still need to understand the basics of their trade… these tools just give them superpowers.

Whether or not this assertion remains true in two or three years remains to be seen, but look at the most popular tool: Claude Code is a command-line tool! Their GUI version is pretty terrible in comparison. Cursor is an IDE fork of VS Code.

These are highly technical tools requiring somebody who knows file systems, command lines, basic development concepts like compilers, etc. They require you to know a lot of stuff most people simply don’t. The direction I think these tools will head is far closer to highly sophisticated dev tooling than general-purpose “magic box” stuff that your parents can use to… I dunno… vibe code the next hit todo app.


> The more I dive into this space, the more I think that developers will still be in heavy demand—just operating at a different level of abstraction most of the time. We will need to know our CS fundamentals, experience will still matter, juniors will still be needed. It’s just that a lot of the time the actual code being generated will come from our little helper buddies. But those things still need a human in the seat to drive them.

It’s disheartening that programmers are using this advanced, cutting-edge technology with such a backwards, old-fashioned approach.[1]

Code generation isn’t a higher level abstraction. It’s the same level but with automation.

See [1]. I’m open to LLMs or humans+LLMs creating new abstractions. Real abstractions that hide implementation details and don’t “leak”. Why isn’t this happening?

Truly “vibe coding” might also get the same job done. In the sense of: you only have to look at the generated code for reasons like how a C++ programmer looks at the assembly. Not to check if it is even correct. But because there are concerns beyond just the correctness like code gen size. (Do you care about compiler output size? Sometimes. So sometimes you have to look.)

[1]: https://news.ycombinator.com/item?id=44163821


I believe you’re arriving at the wrong conclusion because you’re comparing to an opposite instead of to someone slightly worse than you. Will this enable people at the edge to perform like you? That’s the question. Will there be more developers? Will they compete with you?

> LLMs are just another way to talk to a machine. They aren’t magic.

I will still opt for a scriptable shell. A few scripts, and I have a custom interface that can be easily composed. And could be run on a $100 used laptop from ebay.


Those people are absolutely going to get left in the dust. In the hands of a skilled dev, these things are massive force multipliers.

That's one of the sentiments I don't quite grasp, though. Why can't they just learn the tools when they're stable? So far it's been sooo many changes in workflows, basically relearn the tools every three months. It's maybe a bit more stabilized the last year, but still one could spend an enormous amount of time twiddling with various models or tools, knowledge that someone else probably could learn quicker at a later time.

"Being left in the dust" would also mean it's impossible for new people / graduates to ever catch up. I don't think it is. Even though I learned react a few years after it was in vogue (my company bet on the wrong horse), I quickly got up to speed and am just as productive now as someone that started a bit earlier.


Not the person you asked, but my interpretation of “left in the dust” here (not a phrasing I particularly agree with) would be the same way iOS development took off in the 2010s.

There was a land rush to create apps. Basic stuff like flashlight apps and todo lists was created and found a huge audience. Development studios were established, and people became very successful out of it.

I think the same thing will happen here. There is a first mover advantage. The future is not yet evenly distributed.

You can still start as an iOS developer today, but the opportunity is different.


I’m not sure your analogy is applicable here.

The introduction of the App Store did not increase developer productivity per se. If anything, it decreased developer productivity, because unless you were already a Mac developer, you had to learn a programming language you'd never used, Objective-C (now it's largely Swift, but that's still mainly used only on Apple platforms), and a brand-new Apple-specific API, so a lot of your previous programming expertise became obsolete on the new platform. What the App Store did that was valuable to developers was open up a new market and bring a bunch of new potential customers: iPhone users, indeed relatively wealthy customers willing to spend money on software.

What new market is brought by LLMs? They can produce as much source code as you like, but how exactly do you monetize that massive amount of source code? If anything, the value of source code and software products will drop as more is able to be produced rapidly.

The only new market I see is actually the developer tool market for LLM fans, essentially a circular market of LLM developers marketing to other LLM developers.

As far as the developer job market is concerned, it's painfully clear that companies are in a mass layoff mood. Whether that's due to LLMs, or whether LLMs are just the cover story, the result is the same. Developer compensation is not on the rise, unless you happen to be recruited by one of the LLM vendors themselves.

My impression is that from the developer perspective, LLMs are a scheme to transfer massive amounts of wealth from developers to the LLM vendors. And you can bet the prices for access to LLMs will go up, up, up over time as developers become hooked and demand increases. To me, the whole "OpenClaw" hype looks like a crowd of gamblers at a casino, putting coins in slot machines. One thing is for certain: the house always wins.


My take is more optimistic.

I think it will make prototyping and MVP more accessible to a wider range of people than before. This goes all the way from people who don't know how to code up to people who know very well how to code, but don't have the free time/energy to pursue every idea.

Project activation energy decreases. I think this is a net positive, as it allows more and different things to be started. I'm sure some think it's a net negative for the same reasons. If you're a developer selling the same knowledge and capacity you sold ten years ago things will change. But that was always the case.

My comparison to iOS was about the market opportunity, and the opportunity for entrepreneurship. It's not magic, not yet anyway. This is the time to go start a company, or build every weird idea that you were never going to get around to.

There are so many opportunities to create software and companies, we're not running out of those just because it's faster to generate some of the code.


What you just said seems reasonable. However, what the earlier commenter said, which led to this subthread, seems unreasonable: those people unwilling to try the tools "are absolutely going to get left in the dust."

Returning to the iOS analogy, though, there was only a short period of time in history when a random developer with a flashlight or fart app could become successful in the App Store. Nowadays, such a new app would flop, if Apple even allowed it, as you admitted: "You can still start as an iOS developer today, but the opportunity is different." The software market in general is not new. There are already a huge number of competitors. Thus, when you say, "This is the time to go start a company, or build every weird idea that you were never going to get around to," it's unclear why this would be the case. Perhaps the barrier to entry for competitors has been lowered, yet the competition is as fierce as ever (unlike in the early App Store).

In any case, there's a huge difference between "the barrier to entry has been lowered" and "those who don't use LLMs will be left in the dust". I think the latter is ridiculous.

Where are the original flashlight and fart app developers now? Hopefully they made enough money to last a lifetime, otherwise they're back in the same boat as everyone else.


> In any case, there's a huge difference between "the barrier to entry has been lowered" and "those who don't use LLMs will be left in the dust". I think the latter is ridiculous.

Yeah, it’s a bit incendiary, I just wanted to turn it into a more useful conversation.

I also think it overstates the case, but I do think it’s an opportunity.

It’s not just that the barrier to entry has been lowered (which it has) but that someone with a lot of existing skill can leverage that. Not everyone can bring that to the table, and not everyone who can is doing so. That’s the current advantage (in my opinion, of course).

All that said, I thought the Vision Pro was going to usher in a new era of computing, so I’m not much of a prognosticator.


> it’s a bit incendiary

> I also think it overstates the case

I think it's a mistake to defend and/or "reinterpret" the hype, which is not helping to promote the technology to people who aren't bandwagoners. If anything, it drives them away. It's a red flag.

I wish you would just say to the previous commenter, hey, you appear to be exaggerating, and that's not a good idea.


I didn't read the comment as such a direct analogy. It was more recalling a lesson of history that maybe doesn't repeat but probably will rhyme.

The App Store reshuffled the deck. Some people recognized that and took advantage of the decalcification. Some of them did well.

You've recognized some implications of the reshuffle that's currently underway. Maybe you're right that there's a bias toward the LLM vendors. But among all of it, is there a niche you can exploit?


It doesn’t matter how fast you run if it’s not the correct direction.

Good LLM wielders run in widening circles and get to the goal faster than good old-school programmers running in a straight line.

I try to avoid LLMs as much as I can in my role as SWE. I'm not ideologically opposed to switching, I just don't have any pressing need.

There are people I work with who are deep in the AI ecosystem, and it's obvious what tools they're using. It would not be uncharitable in any way to characterize their work as pure slop: it doesn't work, it's buggy, it's inadequately tested, etc.

The moment I start to feel behind I'll gladly start adopting agentic AI tools, but as things stand now, I'm not seeing any pressing need.

Comments like these make me feel like I'm being gaslit.


We are all constantly being gaslit. People have insane amounts of money and prestige riding on this thing paying off in such a comically huge way that it can absolutely not deliver on it in the foreseeable future. Creating a constant pressing sentiment that actually You Are Being Left Behind Get On Now Now Now is the only way they can keep inflating the balloon.

If this stuff was self-evidently as useful as it's being made out to be, there would be no point in constantly trying to pressure, coax and cajole people into it. You don't need to spook people into using things that are useful, they'll do it when it makes sense.

The actual use-case of LLMs is dwarfed by the massive investment bubble it has become, and it's all riding on future gains that are so hugely inflated they will leave a crater that makes the dotcom bubble look like a pothole.


Then where is all this new and amazing software? If LLM can 10x or 100x someones output we should've seen an explosion of great software by now.

One dude with an LLM should be able to write a browser fully capable of browsing the modern web or an OS from scratch in a year, right?


That's a silly bar to ask for.

Chrome took at least a thousand man-years, i.e. 100 people working for 10 years.

I'm lowballing here: it's likely way, way more.

If AI gives a 10x speedup, reproducing Chrome as it is today would require 1 person working for 100 years, 10 people working for 10 years, or 100 people working for 1 year.

Clearly, unrealistic bar to meet.

If you want a concrete example: https://github.com/antirez/flux2.c

The creator of Redis started this project 3 weeks ago and used Claude Code to vibe code it.

It works, it's fast, and the code quality is as high as I've ever seen in a C code base. Easily top 1% in quality.

Look at this one-shotted working implementation of jpeg decoder: https://github.com/antirez/flux2.c/commit/a14b0ff5c3b74c7660...

Now, it takes a skilled person to guide Claude Code to generate this but I have zero doubts that this was done at least 5x-10x faster than Antirez writing the same code by hand.


Ah, right, so it's a "skill issue" when GPT5.3 has no idea what is going on in a private use case.

Literally yes

I still haven’t seen those mythical LLM wielders in the wild. While I’m using tools like curl, jq, cmus, calibre, openbsd,… that has been most certainly created by those old school programmers.

> In the hands of a skilled dev, these things are massive force multipliers.

What do you get from it? Say you produce more, do you get a higher salary?

What I have seen so far is the opposite: if you don't produce more, you risk getting fired.

I am not denying that LLMs make me more productive. Just saying that they don't make me more wealthy. On the other hand, they use a ton of energy at a time where we as a society should probably know better. The way I see it, we are killing the Earth because we produce too much. LLMs help us produce more, why should we be happy?


(Imagine me posting the graph of worker productivity in the US climbing quickly over time while pay remains flat or falls)

Using these tools comes down to basically just writing what you want in a natural language. I don't think it will be a problem to catch up if they need to.

Context management, plan mode versus agent mode, skills vs system prompt, all make a huge difference and all take some time to build intuition around.

Not all that hard to learn, but waiting for things to settle down assumes things are going to settle down. Are they? When?


That these facets of use exist at all is indicative of immature product design.

These are leaked implementation details that the labs are forcing us to know because these are weak, early products and they’re still exploring the design space. The median user doesn’t want to and shouldn’t have to care about details like this.

Future products in this space won’t have them and future users won’t be left in the dust by not learning them today.

Python programmers aren’t left behind by not knowing malloc and free.


Someone will package up all that intuition and skills and I imagine people won't have to do any of these things in future.

You wait for everyone to go broke chasing whatever, and then take their work for your own. It's not that hard to copy and paste.

I only wish I was there when that cocky "skilled dev" is laid off.

> This one, in a way, is the ultimate abstraction.

Is that really true though? I hear the Mythical Man Month "no silver bullet" in my head.... It's definitely a hell of an abstraction, but I'm not sure it's the "ultimate" either. There is still essential complexity to deal with.


> If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?

To go off the deep end… I actually think this LLM assistant stuff is a precondition to space exploration. I can see the need for an offline compressed corpus of all human knowledge that can do tasks and augment the humans aboard the ship. You’ll need it because the latency back to Earth is a killer even for a “simple” interplanetary trip to Mars—that is roughly 3 to 22 minutes each way! Hell, even the moon has enough latency to be annoying.

Granted, right now the hardware requirements and rapid evolution make it infeasible to really “install it” on some beefcake system, but I’m almost positive the general form of Moore’s law will kick in and we’ll have SOTA models on our phones in no time. These things will be pervasive, and we will rely on them heavily while out in space and on other planets for every conceivable random task.

They’ll have to function reliably offline (no web search), which means they probably need to be absolutely massive models. We’ll have to find ways to selectively compress knowledge. For example, we might allocate more of the model weights to STEM topics and perhaps less to, I dunno, the fall of the Roman Empire, Greek gods, or the career trajectory of Pauly Shore. But perhaps not, because who knows—maybe a deep familiarity with Bio-Dome is what saves the colony on Kepler-452b.


All I’m going to say is: if your press release is titled “an update on heroku” instead of something exciting, it means you aren’t delivering happy news that is good for the user.

I bet I’m right. Haven’t read the article or comments, I’m just posting this comment to see if I’m proven right or wrong.


That is so annoying too because it basically throws away all the work the subagent did.

Another thing that annoys me is the subagents never output durable findings unless you explicitly tell their parent to prompt the subagent to “write their output to a file for later reuse” (or something like that anyway)

I have no idea how but there needs to be ways to backtrack on context while somehow also maintaining the “future context”…


