
Throwing my two cents in here...I think there's a disconnect between what AI advocates want, and what everyone else wants.

The arguments against genAI tend to point out things like:

1. Its output is unreliable at best

2. That output often looks correct to an untrained eye and requires expert intervention to catch serious mistakes

3. The process automates away a task that many people rely on for income

And the response from genAI advocates tends to be dismissive...and I suspect it is, in part, because that last point is a positive for many advocates of genAI. Nobody wants to say it out loud, but when someone on Reddit or similar claims that even a 10% success rate outweighs the 90% failure rate, what they mean is most likely "A machine that works 10% of the time is better than a programmer who works 60-80% of the time because the machine is more than 6-to-8-times cheaper than the programmer".

There's also the classic line about how automation tends to create more jobs in the future than it destroys now, which itself is a source of big disconnects between pro-genAI and anti-genAI crowds--because it ignores a glaring issue: Just because there's gonna be more jobs in the future, doesn't mean I can pay rent with no job tomorrow!

"You can write an effective coding agent in a week" doesn't reassure people because it doesn't address their concerns. You can't persuade someone that genAI isn't a problem by arguing that you can easily deploy it, because part of the concern is that you can easily deploy it. Also, "you’re not doing what the AI boosters are doing" is flat-out incorrect, at least if you're looking at the same AI boosters I am--most of the people I've seen who claim to be using generated code say they're doing it with Claude, which--to my knowledge--is just an LLM, albeit a particularly advanced one. I won't pretend this is anything but anecdata, but I do engage with people who aren't in the "genAI is evil" camp, and...they use Claude for their programming assistance.

"LLMs can write a large fraction of all the tedious code you’ll ever need to write" further reinforces this disconnect. This is exactly why people think this tech is a problem.

The entire section on "But you have no idea what the code is!" falls apart the moment you consider real-world cases, such as [CVE-2025-4143](https://nvd.nist.gov/vuln/detail/cve-2025-4143), where a self-described expert programmer working with Claude--one who emphasizes that he went over the results with a fine-tooth comb, precisely to test his own skepticism about genAI!--missed a fundamental OAuth implementation mistake that has been common knowledge for a long while. The author is correct that reading other people's code is part of the job...but that's hard enough when the thing that wrote the code can be asked about its methods, and despite advances in giving LLMs a sort of train of thought, the fact remains that LLMs are designed to output things that "look truth-y", not things that are logically consistent. (Ah, but we're not talking about LLMs, even though kentonv tells us that he just used an LLM. We're talking about agentic systems. No true AI booster would "just" use an LLM...)
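
For anyone who hasn't read the advisory: as I understand it, the bug class is an authorization endpoint that doesn't check the redirect_uri in the incoming request against the client's registered redirect URIs. Here's a minimal sketch of the missing check--the type and function names are hypothetical illustrations, not the actual workers-oauth-provider API:

```typescript
// Illustrative sketch only -- ClientRegistration, validateRedirectUri, and
// handleAuthorizeRequest are hypothetical names, not a real library API.

interface ClientRegistration {
  clientId: string;
  redirectUris: string[]; // registered when the client was created
}

// RFC 6749 and the OAuth 2.0 Security BCP call for exact string matching of
// redirect_uri against the registered list; prefix/substring matching is
// exploitable.
function validateRedirectUri(client: ClientRegistration, requested: string): boolean {
  return client.redirectUris.some((registered) => registered === requested);
}

function handleAuthorizeRequest(client: ClientRegistration, params: URLSearchParams): void {
  const redirectUri = params.get("redirect_uri");
  if (redirectUri === null || !validateRedirectUri(client, redirectUri)) {
    // Do NOT redirect to the unvalidated URI, not even to report the error;
    // fail the request outright.
    throw new Error("invalid redirect_uri");
  }
  // ...proceed to consent screen and authorization-code issuance...
}
```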

I actually agree with the point about how the language can catch and point out some of the errors caused by hallucination, but...I can generate bad function signatures just fine on my own, thank you! :P In all seriousness, this addresses basically nothing about the actual point. The problem with hallucination in a setting like this isn't "the AI comes up with a function that doesn't exist", that's what I'm doing when I write code. The problem with hallucination is that sometimes that function which doesn't exist is my RSA implementation, and the AI 'helpfully' writes an RSA implementation for me, a thing that you should never fucking do because cryptography is an incredibly complex thing that's easy to fuck up and hard to audit, and you really ought to just use a library...a thing you [also shouldn't leave up to your AI.](https://www.theregister.com/2025/04/12/ai_code_suggestions_s...) You can't fix that with a language feature, aside from having a really good cryptography library built into the language itself, and as much as I'd love to have a library for literally everything I might want to do in a language...that's not really feasible.
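
To make the "use a library" point concrete, here's a minimal sketch (my example, not from the post) using the standard Web Crypto API--available in browsers and as a global in Node 19+--for RSA-OAEP encryption, rather than anything hand-rolled:

```typescript
// Minimal sketch: vetted, built-in primitives instead of hand-written RSA.

async function demo(): Promise<void> {
  const { publicKey, privateKey } = await crypto.subtle.generateKey(
    {
      name: "RSA-OAEP",
      modulusLength: 2048,
      publicExponent: new Uint8Array([1, 0, 1]), // 65537
      hash: "SHA-256",
    },
    false, // keys are not extractable
    ["encrypt", "decrypt"],
  );

  const plaintext = new TextEncoder().encode("hello");
  const ciphertext = await crypto.subtle.encrypt({ name: "RSA-OAEP" }, publicKey, plaintext);
  const roundTrip = await crypto.subtle.decrypt({ name: "RSA-OAEP" }, privateKey, ciphertext);

  console.log(new TextDecoder().decode(roundTrip)); // "hello"
}

demo();
```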

"Does an intern cost $20/month? Because that’s what Cursor.ai costs," says the blog author, as if that's supposed to reassure me. I'm an intern. My primary job responsibility is getting better at programming so I can help with the more advanced things my employer is working on (for the record, these thoughts are my own and not those of my employer). It does not make me happy to know that Cursor.ai can replace me. This also doesn't address the problem that, frankly, large corporations aren't going to replace junior developers with these tools; they're going to replace senior developers, because senior developers cost more. Does a senior engineer cost 20 dollars a month? Because that's what Cursor.ai costs!

...and the claim that open source is just as responsible for taking jobs is baffling. "We used to pay good money for databases" is not an epic own, it is a whole other fucking problem. The people working on FOSS software are in fact very frustrated with the way large corporations use their tools without donating so much as a single red cent! This is a serious problem! You know that XKCD about the whole internet being held up by a project maintained by a single person in his free time? That's what you're complaining about! And that guy would love to be paid to write code that someone can actually fucking audit, but nobody will pay him for it, and instead of recognizing that the guy ought to be supported, you argue that this is proof that nobody else deserves to be supported. I'm trying to steelman this blogpost, I really am, but dude, you fundamentally have this point backwards.

I hope this helps others understand why this blogpost doesn't actually address any of my concerns, or the concerns of other people I know. That's kind of the best I can hope for here.


> 1. Its output is unreliable at best

> 2. That output often looks correct to an untrained eye and requires expert intervention to catch serious mistakes

The thing is this is true of humans too.

I review a lot of human code. I could easily imagine a junior engineer creating CVE-2025-4143. I've seen worse.

Would that bug have happened if I had written the code myself? Not sure, I'd like to think "no", but the point is moot anyway: I would not have personally been the one to write that code by hand. It likely would have gone to someone more junior on the team, and I would have reviewed their code, and I might have forgotten to check for this all the same.

In short, whether it's humans or AI writing the code, it was my job to have reviewed the code carefully, and unfortunately I missed here. That's really entirely on me. (It's particularly frustrating for me as this particular bug was on my list of things to check for and somehow I didn't.)

> 3. The process automates away a task that many people rely on for income

At Cloudflare, at least, we always have 10x more stuff we want to work on than we have engineers to work on it. The number of engineers we can hire is basically dictated by revenue. If each engineer is more productive, though, then we can ship features faster, which hopefully leads to revenue growing faster. Which means we hire more engineers.

I realize this is not going to be true everywhere, but in my particular case, I'm confident saying that my use of AI did not cause any loss of income for human engineers, and likely actually increased it.


I mean, fair. It's true that humans aren't that great at writing code that can't be exploited, and the blogpost makes this point too: between a junior engineer's output and an LLM's output, the LLM does the same thing for cheaper.

I would argue that a junior engineer has a more valuable feature--the ability to ask that junior engineer questions after the fact, and ideally the ability to learn and eventually become a senior engineer--but if you're looking at just the cost of a junior engineer doing junior engineer things...yeah, no, the LLM does it more efficiently. If you assume that the goal is to write code cheaper, LLMs win.

However, I'd like to point out--again--that this isn't going to be used to replace junior engineers, it's going to be used to replace senior engineers. Senior engineers cost more than junior engineers; if you want each engineer to be more productive per-dollar (and assume, like many shareholders do, that software engineers are fungible) then the smart thing to do is replace the more costly engineer. After all, the whole point of AI is to be smart enough to automate things, right?

You and I understand that a senior engineer's job is very different from a junior engineer's job, but a stockholder doesn't--because a stockholder only needs to know how finance works to be a successful stockholder. Furthermore, the stockholder's goal is simply to make as much money as possible per quarter--partly because he can just walk out if the company starts going under, often with a bigger "severance package" than any of the engineers in the company. The incentives are lined up not only for the stockholder to not know why getting rid of senior engineers is a bad idea, but to not care. Were I in your position, I would be worried about losing my job, not because I didn't catch the issue, but because the sales pitch is that the machine can replace me.

Aside: Honestly, I don't really blame you for getting caught out by that bug. I'm by no means an expert on anything to do with OAuth, but it looks like the kind of thing that's a nightmare to catch, because it's misbehavior under conditions that only show up when the input is maliciously crafted. If it hadn't been a known issue since the RFC was written, it would probably have taken a lot longer for someone to find it.


Luckily, shareholders do not decide who to hire and fire. The actual officers of the company, hopefully, understand why senior engineers are non-fungible. Tech companies, at least, seem to understand this well. (I do think a lot of non-tech companies that nevertheless have software in their business get this wrong, and that's why we see a lot of terrible software out there. E.g. most car companies.)

As for junior engineers, despite the comparisons in coding skill level, I don't think most people are suggesting that AI should replace junior engineers. There are a lot of things humans do which AI still can't, such as seeing the bigger picture that the code is meant to implement, and also, as you note, learning over time. An LLM's consciousness ends with its context window.


What if the employer were charged a subscription fee, instead of a fee only once a hire is found? Then the incentive is for the company to find a hire quickly, since they're now paying per month instead of per head. Some companies will balk at this, obviously, since existing services charge them per head and they don't know how fast a viable hire will appear...but I'd guess that this would be highly effective at weeding out 'employers' who have no interest in employing any new hires.

(Also, seconding the part about grifts where a candidate is charged upfront. Charging upfront for access to training materials, equipment, or some kind of licensing agreement is often a sign that you're about to get roped into a multi-level marketing scam.)


Having been on the employer's side, I agree with that idea, because I think it aligns incentives a bit better. With paying per head only, the recruiters are motivated to get me to accept the first candidate they can find with the least amount of work. With a monthly subscription, the recruiter can take more time to find a good candidate, because it won't hurt their bottom line.


This sounds like a productive solution, per my reply to @tmn:

> My understanding is that employers already do pay to post on LI, indeed, etc. It simply works out that paying for ghost jobs is worth it, so the enforcement mechanism needs to be a ban/other thing that doesn't rely on pricing

I didn’t realize the LI, Indeed, etc. sites were charging per-head. Could this website be as simple as “an Indeed that charges employers per-month, to disincentivize ghost postings”?

I think the chicken-and-egg problem is a bigger deal than the dating-app problem; while it’s true that the best candidates find jobs quickly, they don’t necessarily stop looking. That said, if there are no postings from employers, you’re dead in the water. How could the first 10 employers be recruited?


Not to mention the influence of companies like Intuit, which has been pushing against convenience in tax filing for decades because its entire business model relies on people being unwilling or unable to file by themselves: https://www.propublica.org/article/inside-turbotax-20-year-f...


The problem isn't that the cost of renting is higher than the cost of mortgaging; the problem is that banks treat people who have to make a $2,500 monthly payment to live decently as if they can't be trusted with a $2,000 payment. People like me are charged a premium for shelter because we did things like...be financially responsible and avoid taking on unnecessary loans or credit card debt. And when we ask why we can't have the cheaper option, the response is...the bank doesn't trust us to make a payment that's _easier_ to make than the one we're already making?

It's blatant nonsense, which makes one wonder what the real reasoning is.


I think an insolvent renter is probably a smaller risk than an insolvent borrower, at least under certain assumptions. I'm not defending it, and have no background in it, but it seems possible.


But we're not talking about an insolvent renter; we're talking about a renter who's making their payments. A renter who makes a $2,500 payment every month can be, and regularly is, denied a mortgage at $2,000 a month, even though--based on that payment history--the renter would not only be able to make those payments, but would find the mortgage payment easier to make--and therefore less risky--than the rent payment.


Doesn't he cite several solutions that already exist and have been demonstrated? For instance, he cites voting systems that have already been implemented in two US states, as well as a video describing the mathematical proof of how that voting system affects the outcomes of elections compared to a simple first-past-the-post vote. That's not "not simple or effective", that's "mathematically proven to be effective".
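
For concreteness--assuming the system he's referring to is ranked-choice (instant-runoff) voting, as adopted statewide in Maine and Alaska--here's a toy sketch with invented ballots, not the cited proof, showing how the same ballots can elect different winners under the two counting rules:

```typescript
// Toy illustration only: the ballots below are invented.
// Same ballots, two counting rules, two different winners.

type Ballot = string[]; // candidates in order of preference

function instantRunoff(ballots: Ballot[]): string {
  const remaining = new Set(ballots.flat());
  while (remaining.size > 1) {
    // Count each ballot for its highest-ranked candidate still in the race.
    const tally = new Map<string, number>();
    for (const c of remaining) tally.set(c, 0);
    for (const b of ballots) {
      const top = b.find((c) => remaining.has(c));
      if (top !== undefined) tally.set(top, (tally.get(top) ?? 0) + 1);
    }
    // Eliminate the candidate with the fewest votes and recount.
    const [loser] = [...tally.entries()].reduce((min, e) => (e[1] < min[1] ? e : min));
    remaining.delete(loser);
  }
  return [...remaining][0];
}

// 9 voters: A has the most first choices (4), so first-past-the-post elects A.
const ballots: Ballot[] = [
  ...Array(4).fill(["A", "B", "C"]),
  ...Array(3).fill(["B", "A", "C"]),
  ...Array(2).fill(["C", "B", "A"]),
];

// Under instant runoff, C is eliminated first, C's voters transfer to B,
// and B beats A 5-4.
console.log(instantRunoff(ballots)); // "B"
```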


It's not that it costs too much money, in many cases. The money's there. The problem is that the solution is not profitable compared to other actions, which is a different thing entirely.

For example, it is much more profitable to sell recurring treatments than it is to sell cures. You can only sell a cure once per instance of disease, after all, whereas selling a recurring treatment means you have an indefinite stream of revenue. (Similar logic applies to selling software as a subscription instead of as a one-off license.)

When something is "not profitable", that does not mean it costs too much. It means that the thing is not as profitable as other options, which is sometimes a result of excessively costly processes, but is at other times a result of having "do nothing" as an option (which by definition costs nothing, and depending on the field can make quite a lot of money).

Another example of something being "not profitable", but not because it costs more, is public goods. Public transportation, for instance. It is undeniable that good public transportation is a boon to society, but public transport is often framed as a business rather than a public good, and in the context of a business, public transport is simply not as profitable as something like a toll road. Running a toll road is close to "do nothing" compared to running a bus line or train line, since everyone brings their own car instead of using the publicly-provided transportation.

The catch is, this is actually MORE costly overall--because many more vehicles have to be fueled and maintained, and cars are relatively inefficient compared to trains. But, because the cost is distributed to the users of the road, the toll road is "cheaper" for the people operating it, and thus more profitable--so if you run government like a business, the toll road is the thing you go with.


"Non-profit" does not imply nobody gets paid. https://donorbox.org/nonprofit-blog/guide-to-nonprofit-salar...


That's my point. Money still has to come from somewhere. Being a non-profit doesn't solve the problem of things costing money.


"Non-profit" also does not imply that money isn't coming in from donations, grants, or fees.


IIRC, the companies that stopped advertising with Twitter had asked--at a conference right when Elon was preparing to take over as CEO--whether he would handle the Trust & Safety team correctly.

The official response was to dodge the question.

Not only can't Elon pin blame on the research agency for his own explicit actions, he also can't pin blame on it for an exodus that started before the agency released its findings. It would be like breaking your foot, going on a rollercoaster with the broken foot, and then suing the theme park for breaking your foot. I'm not a lawyer, but in any sane court, that wouldn't hold water.


He explicitly tweeted that he would create such a council, and block anyone from being added back to Twitter without that council's approval.

Of course, that was a direct lie, and Trump (among other, even less savory people) was added back shortly after.

Elon sycophants on twitter actively don't give a fuck because they WANT such a hostile and disgusting place, where being an asshole is rewarded, suggesting that human beings are bad for the way they were born is encouraged, and little bully cliques are the norm, because they want /pol/ but are butthurt that 4chan defaults to anonymity and won't let you build up a fanbase for your hatred.


> This sexual behaviour statistically does not create STD spread, because of negative feedback...

You know that it's possible to infect someone while also impregnating them, right? In fact, this leads to more STD spread than non-procreative sex, because the child then runs the risk of infection as well. There is no 'negative feedback' here beyond "being upset that your partner infected you", which is already a factor in non-procreative sex.


Yes, of course. This is about negative feedback (https://en.wikipedia.org/wiki/Negative_feedback) in the statistics of infection. People who infect each other but nevertheless remain loyal to each other will not spread the infection further. The child might or might not get infected (https://www.who.int/teams/global-hiv-hepatitis-and-stis-prog...), but that is transmission to the next generation. Also, HIV-positive parents could consider refraining from further procreation once they notice the infection, while maintaining their closeness and loyalty. This could break the positive feedback loop of infection.


You seem to be under the impression that gay people don't get married.


What are you talking about? The comment you’re replying to doesn’t mention anything about gay people at all.

It’s 2023, yes we know gay people get married.


Parent comment's argument is that married heterosexual couples are less likely to spread HIV, on the grounds that a married couple won't spread HIV outside of that pairing if they remain faithful to each other.

Which raises the question...gay people can be monogamous too, so does that not count for whatever reason?


Using punishments and taboos to control these things does not work. You're not incentivizing 'good behavior', you're incentivizing not getting caught--not to mention that, inevitably, the power structure that forms around these taboos starts rewarding people who punish others with the ability to do the taboo themselves.

