Hacker News | Refreeze5224's comments

It's simple: it's just a lie. We are seeing the goal of AI in action here, which is reducing payroll costs.

Since the rush for AGI isn’t panning out, I can see tech firms engaging in tacit collusion that aims to reduce the salaries of software engineers.

There's documented proof of tech firms engaging in explicit collusion back in the 2000s.


I also don't understand that take.

Imagine you run a mowing service with 4 employees. Suddenly 2 more people volunteer to mow yards for your company for free!

Is your reaction to fire two of the paid employees and keep mowing the same number of yards (with reduced payroll costs), or to expand the business to mow more yards?

Which of those responses feels more in line with a "strong and growing" business that is "continuing to support more customers" and has "improving profitability"?


Now imagine you run a mowing service with 4 employees. Suddenly an unbounded number of people appear on the job market who are ready to work for your company at 5% of the cost of your previous employees. Best of all, they become more competent and less expensive over time. You can't yet fire your entire original roster all at once, since they need to teach the new hires the specifics of the job, but after that's done, what do you need them for?

No sane person is claiming AI can fully replace a human worker today.

It is a productivity multiplier at least for now.


> No sane person is claiming AI can fully replace a human worker today.

You must be living under a rock.


How do you square (heh) this with Jack Dorsey axing nearly half of Block's headcount?

If you apply the Pareto principle to productivity, you can fire a lot of people and still manage.
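The Pareto claim above can be made concrete with a toy sketch. All numbers here are invented for illustration (they are not real headcount or productivity data from Block or anywhere else): if output roughly follows an 80/20 split, keeping only the top fifth of a team retains most of the output.

```python
# Hypothetical per-worker output for a 10-person team, chosen so the
# top 2 workers produce 80 out of 100 units (a stylized 80/20 split).
team = [40, 40, 4, 3, 3, 3, 2, 2, 2, 1]

total = sum(team)                      # 100 units from 10 workers
kept = sorted(team, reverse=True)[:2]  # cut 8 of 10, keep the top 2
retained = sum(kept) / total           # fraction of output that survives

print(f"retained output: {retained:.0%}")  # → retained output: 80%
```

The point of the sketch is only arithmetic: under an 80/20 assumption, 20% of the headcount carries 80% of the output, which is the (contested) logic behind "fire a lot of people and still manage."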

A fundamental attribute of capitalism is that labor costs are a regrettable cost center, and any reduction in labor costs without a resulting loss in profit/perceived productivity is a big win. AI is a big win for capitalists, and not so much for anyone who is now suddenly "made redundant". And since we treat shareholder value as sacred and inviolate, too bad for workers who lose their job in this deal.

This is totally true, if you ignore the entire history of taxation in the United States during the 19th and 20th centuries.

Yeah, screw Robert Reich! Always looking out for the workers who make up the majority of this country. Why won't he look out for the poor multi-national corporations, who have no one to advocate for them or their tax rates?

Hey, he can advocate for whatever causes he likes. I just think honesty makes a more compelling argument than lies.

> Always looking out for the workers

How is spreading misinformation looking out for the workers?


If anything it hurts the workers because now people won't listen to him anymore

That's because you are intentionally not included in it. Only he and his rich owning-class buddies are; the rest of us are just profit-generating NPCs.

Epstein class fits here, might as well use it.

Kinda does a number on the whole "literal word of god" thing doesn't it?

Hey, if the KJV was good enough for Paul and the Apostles, it’s good enough for me

SCOTUS rules for the rich and powerful. Most of the time Trump is aligned with them. Sometimes he does dumb shit like tariffs, or things that upset the order the rich and powerful want to maintain, and they rule against him.

Constant fear.

> The boring parts where you learn.

Exactly this. Finding that annoying bug that took 15 browser tabs, digging deep into some library you're using, tracking down where your code is not performant, looking for alternative algorithms or data structures: this is where learning and experience happen. This is why you don't hire a new grad for a senior role; they haven't had time to bang their heads on enough problems.

You get no sense of how or why when using AI to crank something out for you. Your boss doesn't care about either, he cares about shipping and profits, which is the true goal of AI. You are an increasingly unimportant cog in that process.


I actually learn quite a bit from my AI vibe coding and debugging. The other day I had a configuration issue in my codebase that only happened in prod. I didn't understand it (my coworker coded it and he was busy with something else at the time). I asked AI to help and it told me why it was broken and how to fix it, and also fixed it for me.

Since the issue was due to the intersection of k8s and effect, I don't think reading a bunch of docs would have really helped.

Of course I'm sure there's plenty of people who don't care about understanding the bugs and just want to fix things fast. But understanding these bugs helps me prompt/skill the LLM to prevent them in the future.


> I asked AI to help and it told me why it was broken and how to fix it, and also fixed it for me.

This is just it, you didn't learn anything here. In 3 months, you will only remember that AI fixed some issue for you. You will have none of the knowledge and experience that struggling and thinking and googling and trying things out until it works provides. You may as well have asked some other person to fix it, and they would at least have learned something.

Also, anyone could be plugged into your job when it works this way. All they need is someone who can type into a prompt. Which is much easier to find than someone who actually knows the what and how and why of the code. But hey, you fixed the bug, it ships, boss makes money, the company wins. But you sure don't...


> In 3 months, you will only remember that AI fixed some issue for you.

That's not exclusive to AI. Pre-AI, I solved plenty of bugs that I'd have to go down similar rabbit holes to fix again. I've spent days hunting down bugs like this in the past while only remembering that I spent days on them, not anything meaningful. It's not something I enjoy repeating.

> Also, anyone could be plugged into your job when it works this way. All they need is someone who can type into a prompt.

Maybe. The reality is that my coworkers of varying experience levels do attempt to vibecode/debug and are never happy with the results. I don't know what they're prompting, but it just goes to show that it's not as easy as just "typing into a prompt."

> you fixed the bug, it ships, boss makes money

Yeah, that's how it's always been, no? Boss doesn't care how it got fixed as long as it got fixed, and added points if it got fixed quickly so I can work on other features. I may not win long-term if I do use AI, but I certainly don't win short-term if I don't use AI, because I can't afford to spend days to fix a bug that AI can fix in an hour.


Your "perfect" is the enemy of the "good" here: GrapheneOS is a massive improvement over stock Android even if it falls short of perfect.

Their point is, would you be able to prompt your way to this result? No. Already trained physicists working at world-leading institutions could. So what progress have we really made here?


It's a stupid point, then. Are you able to work with a world-leading physicist to any significant degree? No.


It's like saying: calculator drives new result in theoretical physics

(In the hands of leading experts.)


No, it's not like saying that at all, which is why OpenAI has a credit on the paper.


OpenAI has a credit on the paper because it is marketing.


Lol Okay


And even if it were, calculators (computers) were world-changing technology when they were new.


No it’s like saying: New expert drives new results with existing experts.

The humans put in significant effort and couldn’t do it. They didn’t then crank it out with some search/match algorithm.

They tried a new technology, modeled (literally) on us as reasoners, that is only just being able to reason at their level and it did what they couldn’t.

The fact that the experts were critical context for the model doesn't make the model's performance any less significant. Collaborators always provide important context for each other.

