Anthropic optimized for "clean UI" metrics and forgot developers care more about not having their codebase silently corrupted. Every AI company relearns the same lesson: autonomy is the enemy of trust.

Moving the project to a foundation is smart. Most AI tools die when the founder leaves. This one might actually survive.

Reverting a few trivial commits because of purity tests is a bad precedent. It rewards the loudest commenters and punishes maintainers.

It will be a painful decade before those who have already lost this weird ideological war finally realize it.

And which side is that? I mean, from my point of view, it seems like it’s probably the ones who are having a magic robot write a thousand lines of code that almost, but not quite, does something sensible, rather than using a bloody library.

(For whatever reason, LLM coding things seem to love to reinvent the square wheel…)


> the ones who are having a magic robot write a thousand lines of code that almost, but not quite, does something sensible

Gee, I wonder which "side" you're on?

It's not true that all AI-generated code looks like it does the right thing but doesn't, or that all human-written code does the right thing.

The code itself matters here. So given code that works, is tested, and implements the features you need, what does it matter if it was completely written by a human, an LLM, or some combination?

Do you also have a problem with LLM-driven code completion? Or with LLM code reviews? LLM assisted tests?


Oh, yeah, I make no secret of which side I’m on there.

I mean I don’t have a problem with AI driven code completion as such, but IME it is pretty much always worse than good deterministic code completion, and tends to imagine the functions which might exist rather than the functions which actually do. I’ve periodically tried it, but always ended up turning it off as more trouble than it’s worth, and going back to proper code completion.
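To make the failure mode concrete, here's a made-up JavaScript sketch (the class and names are entirely hypothetical):

    // A trivial class with one real accessor.
    class User {
      constructor(name) { this.name = name; }
      get displayName() { return this.name.toUpperCase(); }
    }

    const user = new User('alice');
    console.log(user.displayName); // "ALICE" -- the member that actually exists

    // An LLM completion will cheerfully suggest user.getDisplayName(),
    // a method that was never defined; deterministic completion can only
    // offer members that are actually there.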

LLM code reviews, I have not had the pleasure. Inclined to be down on them; it’s the same problem as an aircraft or ship autopilot. It will encourage reduced vigilance by the human reviewer. LLM assisted tests seem like a fairly terrible idea; again, you’ve got the vigilance issue, and also IME they produce a lot of junk tests which mostly test the mocking framework rather than anything else.
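The junk-test pattern looks roughly like this (a hypothetical Jest sketch; every name is invented):

    // The dependency is mocked out entirely, so the assertion only checks
    // that jest.fn() hands back the value we just configured. No production
    // code runs; this "tests" the mocking framework.
    const fetchUser = jest.fn().mockResolvedValue({ name: 'Alice' });

    test('fetchUser returns the user', async () => {
      const user = await fetchUser(42);
      expect(user.name).toBe('Alice'); // asserting the mock against itself
    });

A test like that goes green no matter what the real code does, which is exactly the vigilance trap.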


LLM code reviews are completely and utterly worthless.

I do like using them for writing tests, but you really have to be careful. Still, I prefer it to doing all the testing by hand.

But for like, the actual code? I'll have it show me how to do something occasionally, or help me debug, but it really just can't create truly quality, reliable code.


I’m not sure where you’ve been the last four years, but we’ve come a long way from GPT 3.5. There is a good chance your work environment does not permit the use of helpful tools. This is normal.

I’m also not sure why programmatically generated code is inherently untrustworthy but code written by some stranger whose competence and motives are completely unknown to you is inherently trustworthy. Do we really need to talk about npm?


Dependencies aren't free. A library with less than a thousand lines of code total is really janky as a dependency. Sometimes it makes sense, as with PicoHTTPParser, but often it doesn't.

Left-pad isn't a success story to be reproduced.


Not saying left-pad is a good idea; I’m not a JavaScript programmer, but my impression has always been that the language desperately needs something along the lines of Boost/Apache Commons etc.

EDIT: I do wonder if some of the enthusiastic acceptance of this stuff is down to the extreme terribleness of the JavaScript ecosystem, tbh. LLM output may actually beat left-pad (beyond the security issues and the absurdity of having a library specifically to left-pad things, it at least used to be rather badly implemented), but a more robust library ecosystem, as exists for pretty much all other languages, not so much.


Left-pad was plain bad; we already had well-known, tested, reliable utility libraries like lodash that provided it among their functions.
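And these days the whole thing is a one-liner on the standard String prototype anyway; a quick sketch (padStart has been built in since ES2017):

    // No dependency needed: padStart is standard since ES2017.
    const leftPad = (str, len, ch = ' ') => String(str).padStart(len, ch);

    console.log(leftPad('42', 5, '0')); // "00042"
    console.log('foo'.padStart(6));     // "   foo" -- or skip the wrapper entirely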

The magic robot will get better. The complainers won't.

So, first of all “but it’ll get better” has been the AI refrain since the 1950s. Voice recognition rapidly went from “doesn’t work at all” to “kinda works” in the 80s-90s, say, and in recent years has reached the heady heights of ‘somewhat useful’, though you still wouldn’t necessarily trust your life to it.

But also… okay, so maybe AI programming tools get good enough at some point. In which case, I suppose I’ll use them then! Why would I use a bad solution preemptively on the promise of jam tomorrow? Waiting for the jam surely makes more sense.


Not once in history has new technology lost to its detractors, even if half its proponents were knuckleheads.

Web3, Google Glass, Metaverse, NFTs…

Those are products, not technologies.

Google Glass was a product. The others definitely are not.

Web3, the Metaverse and NFTs all failed to stand on their own two legs as technologies. It feels fair to call them products; none of them ever attained their goal of real decentralization.

Ah, yes. That’s why we all have our meetings in the metaverse, then go back home on the Segway, to watch 3D TV and order pizza from the robotic pizza-making van (an actual silly thing that SoftBank sunk a few hundred million into). And pay for the pizza in bitcoin, obviously (in fairness, notoriously, someone did do that once).

That’s just dumb things from the last 20 years. I think you may be suffering from a fairly severe case of survivorship bias.

(If you’re willing to go back _30_ years, well, then you’re getting into the previous AI bubble. We all love expert systems, right?)


Nuclear power disagrees

Nuclear power will win (obviously). Unless you're talking about nuclear weapons.

The latest counter-example is NFTs.

NFTs lost because they didn't do anything useful for their proponents, not because people were critical of them. They would've fizzled out for that reason even without detractors.

On the other hand, normal cryptocurrencies continue to exist because their proponents find them useful, even if many others are critical of their existence.

Technology lives and dies by the value it provides, and both proponents and detractors are generally ill-prepared to determine such value.


Oh, it's "because of this and that" now?

The original topic was "not once blah blah...". I don't have to entertain you further, and won't.


Okay, but during the NFT period, HN was trying to convince me that they were The Future. Same with metaverses, same with Bitcoin. I mean, okay, it is Different this time, so we are told. But there’s a boy who cried wolf aspect to all this, y’know?

Baseline assumption: HN is full of people who assume that the current fad is the future. It is kind of ground zero for that. My HN account is about 20 years old and the zeitgeist has been right like once.


moving the goalposts

This sort of purity policing happens to other mission-driven open source projects. The same thing happens to Firefox. Open source projects risk spending all their time trying to satisfy a fundamentally extreme minority, while the big commercial projects act with impunity.

It seems like it is hard to cultivate a community that cares about doing the right thing, but is focused and pragmatic about it.


What if the users legitimately don't want AI-written software?

You have to think twice about whether you really want to cater to these 'legitimate users' then. In Steam's review sections you can find people giving negative reviews just because the game uses Unity or Unreal. Should devs cater to them and develop an in-house engine?

maybe? devs should weigh the feedback and decide what they think will best serve the project. open source is, especially, always in conversation with the community of both users and developers.

> open source is, especially, always in conversation with the community of both users and developers

Not necessarily. sqlite doesn't take outside contributions, and seems to not care too much about external opinion (at least, along certain dimensions). sqlite is also coincidentally a great piece of software.


Then they have the right to not use it: Stoat does not have a monopoly on chat software.

Then they can go and use software that's not AI written.

And then you have the "Alas, the sheer fact that LLM slop-code has touched it at all is bound to be a black stain on its record" comments.

maybe a preview of what's to come when the legal system rules the plagiarism machine's output is a derivative work?

Since a human can also be a "plagiarism machine" (it's a potential copyright violation for me and an LLM alike to create images of Mickey Mouse for commercial use), it'll matter exactly what the output is, won't it?

Looks polished. For me the switching cost is real since Screen Studio already works. If you can win on editing speed, captions, and clean exports that stay portable, there is definitely room for a new tool.

Thanks! We're working on that!

Humans did the actual work: framing the problem, computing base cases, verifying results. GPT just refactored a formula. That's a compiler's job, not a physicist's. Stop letting marketing write science headlines.

Every release they claim it writes production code, but my team still spends hours fixing subtle bugs the model introduces. The demos are cherry-picked and the real-world failure rate is way higher than anyone admits. Meanwhile we keep feeding them our codebases for free training data.

How would that compare to subtle bugs introduced by developers? I have seen a massive amount of bugs during my career, many of those introduced by me.

it compares... unfavorably, on the side of AI

Not from what I'm seeing. 5.3 codex xhigh is pretty amazing.

COSS companies want it both ways. Free community contributions and bug reports during the growth phase. Then closed source once they've captured enough users. The code you run today belongs to you. The roadmap belongs to their investors.

Duolingo used unpaid labour to build its resources. Now it charges money for premium.

Dark patterns are just polite robbery by corporations that realized psychological manipulation pays better than service. The grift is the product, not the bug.

AI companies dumped this mess on open source maintainers and walked away. Now we are supposed to thank them for breaking our workflows while they sell the solution back to us.

Funny how AI is an "agent" when it demos well for investors but just "software" when it harasses maintainers. Companies want all the hype with none of the accountability.
