What human experts do you blindly trust without double checking?


Most human experts, when asked about their area of expertise, don't parrot what some guy said as a joke on Reddit five years ago.

Most lawyers, when you ask them to write a brief, will cite only real cases.


I coined the term "fancy cruise control" on Reddit, as a joke, to describe Autopilot. One of the mods of the self-driving car sub thought the term was so funny he made a joke subreddit for it. A few years later, Tesla lawyers invoked the term in court to downplay Autopilot's capabilities.


"Most" is the key word here. In my experience that's also the case for LLMs.


LLM proponents really have succeeded in moving the Overton window on this discussion: "Sure, you cannot trust LLMs, but you cannot trust humans, either."


I don’t think “Overton window” works in that construction. It typically refers to the range of politically acceptable opinions.

LLMs are too new to have such a thing. It sounds like you’re an “LLM opponent” (whatever that means) who believes the appropriate standard is infallibility? I don’t even get that line of thinking, but you’re welcome to it. But let’s not pretend this is a decades-long topic with a social consensus that people try to influence.


I didn't mean "Overton window" in a political sense (I'm not a native English speaker). It's more about moving the goalposts, maybe.

> I don’t even get that line of thinking, but you’re welcome to it

I would not say "LLM opponent"; rather, "LLM critic". I'm not against LLMs as a technology. I'm worried about how the technology is deployed and used, and what the consequences are: specifically, copyright issues, power-use issues, and inherent biases in the training data that reinforce existing discrimination against minorities, racism, and sexism. I'm not convinced by the hype created by LLM proponents (mostly investors and other companies and people who financially benefit from LLMs). I'm not saying that machine learning doesn't bring any value or has no use cases; I'm talking about the recent AI/LLM hype.


Most of them. Are you constantly running validation studies on every piece of information you take in? If independent experts tell me that a new car is safe to drive, then I trust them.



