
> It reminded me of a time I was at the ER for a rib injury and could see my doctor Wikipedia'ing stuff

To be honest, I'm much more comfortable with a doctor looking things up on wikipedia than using LLMs. Same with lawyers, although the stakes are lower with lawyers.

If I knew my doctor was relying on LLMs for anything beyond the trivial (RAG or not), I'd lose a lot of trust in that doctor.



Automation bias plus the LLM failure mode (competent, confident, and inevitably wrong) will absolutely cost lives.

I am a fan of ML, but simplicity bias and the fact that hallucinations are an intrinsic feature of LLMs are problematic.

ML is absolutely appropriate and will be useful for finding new models in medicine, but blindly relying on it is dangerous and negligent; even quantification is often not analytically sufficient in this area.


That's fair, and although I disagree, I at least like that the debate has evolved from doctors vs LLMs to Wikipedia vs LLMs.

When we accept that AI is not replacing knowledge workers, the conversation changes to a more digestible and debatable one: Are LLMs useful tools for experts? And I think the answer will be a resounding: Duh


> When we accept that AI is not replacing knowledge workers

I don't accept this, personally. These tools will absolutely be replacing workers of many types. The only questions are which fields and to what degree.

> Are LLMs useful tools for experts?

I didn't think this was a question in play. Of course they can be, once experts figure out how to use them effectively. I thought the question was whether or not the cost/benefit ratio is favorable overall. Personally, I'm undecided on that question because there's not nearly enough data available to do anything but speculate.


> These tools will absolutely be replacing workers of many types

Yeah I agree with that, that's why I specified knowledge workers. I don't think it's bad if cashiers get replaced by self-checkout or if receptionists get replaced by automated agents on either end.

Emergency/police dispatchers - obviously increased sensitivity that makes it a special case, but I still think AI can eventually do the job better than a human.

Driving cars - not yet, at least not outside specific places, but probably eventually, and definitely for known routes.

Teaching yoga - maybe never, as easy as it would be to do, some people might always want an in-person experience with a human teacher and class.

But importantly - most knowledge workers can't be displaced by AI when the work entails solving problems with undocumented solutions that the AI could not have trained on yet, or any work that involves judgment and subjectivity, or that requires a credential (doctor to write the prescription, engineer to sign off on the drawing) or security clearance, authorizations, etc. There's a lot of knowledge work it can't touch.


> that's why I specified knowledge workers.

I don't think all knowledge workers are immune. Some will be, but companies are going to shed as much payroll as their customers will tolerate.

> I don't think it's bad if cashiers get replaced by self-checkout or if receptionists get replaced by automated agents on either end.

Well, it's bad for those workers. And, personally, I'd consider it bad for me. Having to use self-checkout is a much worse experience than human cashiers. Same with replacing receptionists (and etc.) with automated agents.

When people bring up these uses for LLMs, it sounds to me like they're advocating for a world that I honestly would hate to be a part of as a customer. But that's not really about LLMs as much as it's about increasing the rate of alienation in a world where we're already dangerously alienated from each other.

We need more interpersonal human interactions, not less.


> To be honest, I'm much more comfortable with a doctor looking things up on wikipedia than using LLMs. Same with lawyers, although the stakes are lower with lawyers.

Yeah, a Wikipedia-using doctor could at least fix the errors on Wikipedia they spot.


>> Same with lawyers, although the stakes are lower with lawyers.

Doctors and lawyers appear to be using LLMs in fundamentally different ways. Doctors appear to use them as consultants: the LLM spits out an opinion and the doctor decides whether to go with it or not. Doctors are still writing the drug prescriptions. Lawyers seem to be submitting LLM-generated text to courts without even editing it, which is like a doctor handing the prescription pad to the robot.


That’s just the highly publicized failures of lawyers. There are likely lawyers also using them discerningly and doctors using them unscrupulously, just not as publicized.

If a doctor wrote the exact prescription an LLM outputs, how would anyone other than the LLM provider know?


It was garbage lawyers doing that. Straight up a Cooley Law graduate (worst law school in America - the same one Michael Cohen attended).

The good lawyers are using LLMs without being detected because they didn't submit it verbatim without verification.


I’m less concerned about how trained professionals use LLMs than I am about untrained folks using them to be a DIY doctor/lawyer.

Luckily doctoring has the safeguard that you need a professional to get drugs/treatments, but there isn't as much of a safety net for lawyering.


>> safety net for lawyering

There are some nets, but they aren't as official. The lawyer version of a doctor's prescription pad is the ability to send threatening letters on law firm letterhead. Lawyers are also afforded privileges in jails and prisons, things like non-monitored phone calls, that aren't made available to non-lawyers.


But there's no safety net for things that are outside the justice system (e.g., "is this a fair contract?") or things that aren't in the justice system yet (e.g., "am I allowed to cut down my neighbor's tree if it blocks my view?").


As there are no safety nets for people who want to perform their own surgery on themselves or take de-worming meds instead of getting a vaccination.



