kelseyfrog's comments | Hacker News

Except the services that are intractably human: educators, judges, lawyers, social workers, personal trainers, childcare workers.

Those will suffer the Baumol effect and their prices will rise to extraordinary levels.


There are already examples of lawyers offloading work to ChatGPT even though they weren't allowed to. The same goes for educators (and students), though if all other work is automated, what is there to educate for, and how would prospective students pay?

As for social work and childcare, for now I agree:

My expectation is that general-purpose humanoid robots, being smaller than cars and needing to do a strict superset of what driving a car requires, arrive at least a decade after self-driving cars lose all of the steering wheels, the geofences, and any remote safety drivers. And that's even with the expected algorithmic improvements; if we don't get algorithmic improvements, then hardware improvements alone will push this out to at least 18 years between that level of FSD and androids.


I imagine personal trainers and childcare workers would see a drop in demand and perhaps also an increase in supply if a bunch of people suddenly lost their jobs to AI.

One would assume - if this were to happen - that supply and demand would bring prices back down, as everyone would rush to those fields.

Our increased efficiency in producing manufactured goods, technology, food, and clothing has already produced this effect in healthcare, education, childcare, and more. That's how the effect works.

The only question is, are we prepared to deal with the social ramifications? Are we ok with new crises? Imagine the current problems dialed up 10x. Are we prepared to say, "the market is in a new equilibrium, and that's ok"?


Healthcare, education and childcare are either free or affordable in almost all developed countries.

Even in places where these services are expensive, it does not seem to be because the workers are highly paid.


They are not free; they are paid for by taxes. And in pretty much all countries, irrespective of funding model, these services have increased in price much faster than general inflation. This is the Baumol effect in action.
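To make the mechanism concrete, here's a toy model (all numbers invented): wage growth set by a sector with productivity gains gets imported into a sector without them, so the latter's relative price compounds upward.

    # Toy illustration of the Baumol effect; every number here is made up.
    # Manufacturing productivity grows 3%/yr, so manufacturing wages can too.
    # Teaching productivity is flat, but schools must match wage growth to
    # retain workers, so the unit cost of education rises in real terms.
    YEARS = 30
    wage_growth = 0.03          # annual wage growth, set by the productive sector
    productivity_growth = 0.03  # manufacturing only; education gets none

    mfg_unit_cost = 1.0
    edu_unit_cost = 1.0
    for _ in range(YEARS):
        edu_unit_cost *= (1 + wage_growth)  # costs track wages one-to-one
        mfg_unit_cost *= (1 + wage_growth) / (1 + productivity_growth)

    print(f"manufacturing unit cost after {YEARS}y: {mfg_unit_cost:.2f}")  # 1.00
    print(f"education unit cost after {YEARS}y: {edu_unit_cost:.2f}")      # 2.43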

The best educator I’ve ever had is ChatGPT.

How scalable is that, in the sense of teachers being obsoleted and schools running with zero staff?

The big tech AI barons absolutely claim that their LLMs can replace educators, judges, lawyers, and personal trainers. I've seen some vague claims about childcare robots, but for whatever reasons anything that's not pure software appears to be currently outside their field of vision. They're unlikely to make any claims about social workers because there's not enough money in it.

No; the services that seem most intractably human, at least given the current state of things, are very much those in personal care roles—nurses, elder care workers, similar sorts of on-the-ground, in-person medical/emotional care—and trades, like plumbing, construction, electrical work, handcrafts, etc.

Until we start seeing high-quality general-purpose robots (whether they're humanoid or not), those seem likely to be the jobs safest from direct attempts to replace them with LLMs. That doesn't mean they'll be safe from the overall economic fallout, of course, nor that the attempts to replace knowledge work of all types will actually succeed in a meaningful way.


It's like banning children from owning and carrying handguns. They still have knives and ultimately fists. We cannot eliminate harms; therefore we should not attempt to reduce harms.

Same thing with T2D. If your blood sugar is dysregulated due to an insulin sensitivity disorder, you should simply die.

I wouldn't go that far. Other, less invasive treatments should still be available IMHO, but there should remain an element of personal accountability. Gene editing is a very powerful tool, and messing with complex systems in powerful ways that we don't fully understand could be a recipe for many troubles down the line. I think gene editing should be applied very surgically, to obviously detrimental mutations, not in some scattergun approach.

What if the body raising cholesterol levels serves some purpose we aren't yet aware of? I've heard there's some evidence that medication to reduce blood pressure has a potential link to the onset of Parkinson's disease. Maybe messing with blood pressure in that way without addressing underlying causes has been a mistake, and messing with cholesterol levels without addressing underlying causes could also be.


That is why we do long and expensive trials before approving any medication for use.

Having said that, we have been medically lowering people's cholesterol levels for decades, and the evidence seems pretty clear at this point that it is a net health benefit to those for whom treatment is indicated.

It is not at all obvious that targeted gene editing would be more disruptive to the body than flooding it with a drug that happens to interfere with the one part of the process that we found a drug to interfere with.

Particularly if we are editing the gene to match a form that is already present in much of the population.


Some issues could only become evident over a period of hundreds of years with gene editing. That's longer than any medical trial I'm aware of. And mistakes made would be difficult, if not impossible, to undo.

If medications can already do what's required for cholesterol issues, why wouldn't we continue to use them rather than making some change to affect a complex balance that could cause problems over very long timescales?

If we were to be editing a specific gene to match what the wider population has, then I'd be more ok with that.


We should give or deny medical treatment based on our personal values, such as responsibility.

Medical ethics boards already do such things, don't they?

You're confused. Maybe do some research on ethics boards?

If you think so, what sources would you recommend? According to Wikipedia on medical ethics, "These values include the respect for autonomy". Not expecting any level of self control doesn't show respect for autonomy IMHO.

The AMA Principles of Medical Ethics[1] is a good starting point.

Principle nine reads:

> IX. A physician shall support access to medical care for all people.

I can't, in good conscience, reconcile withholding care in the name of personal responsibility with supporting access to medical care for all people.

1. https://code-medical-ethics.ama-assn.org/principles


Interesting. I'll look into that. The Hippocratic oath says that a physician should do no harm (ἐπὶ δηλήσει δὲ καὶ ἀδικίῃ εἴρξειν, "I will keep them from harm and injustice"). It's a personal value judgement as to whether some intervention is providing medical care or causing harm. I consider reckless genetic modification to be causing harm.

> The Hippocratic oath says that a physician should do no harm

No it doesn't. Such a standard would make the practice of medicine impossible, as all treatments have some risk of harm.

What is relevant to this discussion is the below excerpt of the modern form of the oath:

> I will apply, for the benefit of the sick, all measures which are required, avoiding those twin traps of overtreatment and therapeutic nihilism.


And depression as well, you gotta just buck up and smile and try real hard to not jump off of a bridge.

Right, all mental health problems are a choice (or a call for attention).

> How can you get a machine to have values?

The short answer is a reward function. The long answer is the alignment problem.

Of course, everything in the middle is what matters. Explicitly defined reward functions are complete, but not consistent. Data-defined rewards are potentially consistent but incomplete. It's not a solvable problem for machines, but neither is it for humans. Still, we practice, improve, and muddle through despite this, hopefully approximating improvement over long enough timescales.
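To make the complete/consistent distinction concrete, here's a minimal Python sketch; every name and number in it is hypothetical:

    # 1. Explicitly defined reward: complete (it scores every outcome) but
    #    not consistent with what we actually value -- the proxy gets gamed.
    def explicit_reward(outcome: dict) -> float:
        return 1.0 * outcome["task_completed"] - 1e-5 * outcome["tokens_used"]

    # 2. Data-defined reward: fit to human preference comparisons.
    #    Potentially consistent with our values, but incomplete -- only
    #    trustworthy near the outcomes humans actually labeled.
    class LearnedReward:
        def __init__(self, preference_pairs):
            # [(preferred_outcome, rejected_outcome), ...] from human raters
            self.pairs = preference_pairs

        def score(self, outcome) -> float:
            # stand-in for a fitted model: favor outcomes resembling ones
            # humans preferred; behavior off-distribution is undefined
            return float(sum(outcome == preferred for preferred, _ in self.pairs))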


Well, it's pretty clear to me that the current reward function of profit maximization has a lot of downsides that aren't sufficiently taken into account.

The only thing worse than it is anything-else-maximisation.

That's an incredibly easy thing to change. It takes little money, a flared base, and only mild curiosity.

Intellectual property is ontologically incoherent. Stealing IP isn't possible because IP is a legal construct, not something that exists in the natural world nor in reality.

I'm curious why you have a licensing agreement in your "about" if IP isn't real.

So I can do very silly things with the law when my comments end up used for commercial purposes.

The point is to highlight the contradiction, not to avoid it.


Algorithmic breakthroughs (increases in efficiency) risk Jevons paradox: more efficient processes make deploying them even more cost-effective, which increases demand.
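Whether the paradox bites depends on demand elasticity. A toy calculation, assuming constant-elasticity demand (the model and all numbers are invented):

    # Toy Jevons-paradox arithmetic.
    # Constant price elasticity e:  Q = k * p**(-e).
    # A 2x efficiency gain roughly halves both the resource needed per unit
    # of service and its price.
    def resource_use(price: float, elasticity: float, k: float = 1.0) -> float:
        quantity = k * price ** (-elasticity)  # units of service demanded
        return quantity * price                # resource per unit tracks price here

    before = resource_use(price=1.0, elasticity=1.5)
    after = resource_use(price=0.5, elasticity=1.5)
    print(after / before)  # ~1.41: with elastic demand (e > 1), the
                           # efficiency gain raises total resource consumption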

2002, ideally.

Or the Bancor[1], like how Bretton Woods was originally envisioned.

1. https://en.wikipedia.org/wiki/Bancor


The "Council of models" is a good first step, but ultimately I found myself settling on an automated talent acquisition pipeline.

I have a BIRTHING_POOL.md that combines the best AGENTS.md files and introduces random AI-generated mutations and deletions. The candidates are tested with take-home PRs, which are reviewed by HR.md and TECH_MANAGER.md. TECH_MANAGER.md measures completion rate per token (effectiveness) and then sends a stack ranking of the AGENTS.md candidates to HR to manage the talent pool. If agent effectiveness drops low enough, we pull from the birthing pool and interview more candidates.

The end result is that it effectively manages a wider range of agent talents, and you don't get the agent hive-mind spirals you see when every worker has the same system prompt.
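Abstractly, this is an evolutionary loop over system prompts. A hypothetical sketch of its shape in Python; every name, function body, and threshold below is invented for illustration:

    import random

    def mutate(prompt: str) -> str:
        # stand-in for BIRTHING_POOL.md's AI-generated mutations/deletions
        return prompt + f" [mutation {random.randint(0, 999)}]"

    def take_home_score(agent: str) -> float:
        # stand-in for reviewed take-home PRs: completion rate per token
        return random.random()

    def run_talent_pipeline(team: dict[str, float], rounds: int = 10,
                            floor: float = 0.3) -> dict[str, float]:
        for _ in range(rounds):
            # TECH_MANAGER.md: stack-rank the current team by effectiveness
            for agent in team:
                team[agent] = take_home_score(agent)
            ranked = sorted(team, key=team.get, reverse=True)
            # HR.md: cut underperformers, hire mutated top performers
            for agent in ranked:
                if team[agent] < floor:
                    del team[agent]
                    hire = mutate(ranked[0])  # interview a new candidate
                    team[hire] = take_home_score(hire)
        return team

    team = {f"AGENTS_v{i}.md": 0.0 for i in range(4)}
    print(run_talent_pipeline(team))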


Is this satire? I can't tell any more.

I sincerely hope so. Gas Towns, Birthing Pools, Ralph Wiggums. Some people seem to have lost any sense of reality.

I don't understand why people try so hard to anthropomorphize these tools and map them to human sociology...

It's all abstractions to help your brain understand what electrons are doing in impossibly pure sand. Pick the one that frees up the most overhead to think about problems that matter.
