
How do you address this problem with people? More than once a real live person has told me something that was wrong.


You can divide your approach to asking people questions into two modes (and I do believe this is something people actually do):

1. You ask someone you trust for facts and opinions on a topic, but you keep in mind that the answer might only be right 90% of the time. Also, people tend to tell you if they are not sure.

2. For answers you need to rely on, you ask people who are legally or professionally liable if they give you wrong advice: doctors, lawyers, car mechanics, the police, etc.

ChatGPT can't lose its job if it informs you incorrectly.


If ChatGPT keeps giving you wrong answers, wouldn’t this make paying customers leave? Effectively “losing its job”. But I guess you could say it acts more like the person that makes stuff up at work if they don’t know, instead of saying they don’t know.


There was an article here just a few days ago that discussed how firms can be ineffective and still remain competitive.

https://danluu.com/nothing-works/

The idea that competition is effective is often in spherical-cow territory.

There are tons of real-world conditions that can easily let a firm be terrible at its core competency and still survive.


> But I guess you could say it acts more like the person that makes stuff up at work if they don’t know, instead of saying they don’t know.

I have had language models tell me they don't know. Usually when using a RAG-based system like Perplexity, but they can say they don't know when prompted properly.
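To illustrate, here's a minimal sketch of the kind of prompting I mean, assuming the official openai Python package and a placeholder model name; the system prompt wording is just an example, not a tested recipe:

    # Sketch: ask the model to admit uncertainty instead of guessing.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model works here
        messages=[
            {"role": "system",
             "content": "If you are not confident in an answer, "
                        "say exactly: I don't know."},
            # The first Tour de France was in 1903, so this is unanswerable:
            {"role": "user", "content": "Who won the 1897 Tour de France?"},
        ],
    )
    print(resp.choices[0].message.content)  # ideally: "I don't know"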


I've seen Perplexity misrepresent search results, and also interpret them differently depending on whether GPT-4o or Claude Sonnet 3.5 is being used.


I'm not sure about your local laws, but at least in Lithuania it's completely legal to give wrong advice (by accident, of course)... Even a notary would at most pay a higher insurance premium for a while, because human error falls under professional insurance.


You are contradicting yourself. If the notary needs insurance, then there's a legal liability they are insuring against.

If you had written "notaries don't even get insurance because giving bad advice is not something you can be sued for", you would be consistent.


Experience. If I recognize that they give unreliable answers on a specific topic, I don’t question them on that topic anymore.

If they lie on purpose I don’t ask them anything anymore.

Real experts give reliable answers; LLMs don’t.

The same question can yield different results.


So LLMs are unreliable experts, okay. They're still useful if you understand their particular flavor of unreliability (basically, they're way too enthusiastic) - but more importantly, I bet you have exactly zero human experts on speed dial.

Most people don't even know any experts personally, much less have one they could call for help on demand. Meanwhile, the unreliable, occasionally tripping pseudo-experts named GPT-4 and Claude are equally unreliably expert in every domain of interest known to humanity, and they don't mind me shoving a random 100-page PDF in their face in the middle of the night. They'll still happily answer within seconds, and the whole session costs me a fraction of a cent, so I can ask for a second, third, and tenth opinion, and then a meta-opinion, and then compare and contrast with search results, and they don't mind that either.

There's lots to LLMs that more than compensates for their inherent unreliability.
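For what it's worth, the "tenth opinion" part is trivial to automate. A throwaway sketch, assuming the same openai Python client as above; the model name and the exact-match tally are placeholders, and real answers would need a fuzzier comparison:

    # Sketch: sample the same question several times and check agreement.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()

    def opinions(question: str, n: int = 5) -> Counter:
        """Tally n independent answers to the same question."""
        answers = Counter()
        for _ in range(n):
            resp = client.chat.completions.create(
                model="gpt-4o",  # placeholder model name
                messages=[{"role": "user", "content": question}],
            )
            answers[resp.choices[0].message.content.strip()] += 1
        return answers

    # Five samples, five different answers => treat this topic as one
    # where the "expert" is unreliable; broad agreement => more trust.
    print(opinions("In what year was the first Tour de France held?"))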


> Most people don't even know any experts personally, much less have one they could call for help on demand.

Most people can read original sources.


Which sources? How do I know I can trust the sources that I found?


They can, but they usually don't, unless forced to.

(Incidentally, not that different from LLMs, once again.)


How do you even know what original sources to read?


There's something called a bibliography at the end of every serious book.


I am recalling CGP Grey's descent into madness due to actually following such trails through historical archives: https://www.youtube.com/watch?v=qEV9qoup2mQ

Kurzgesagt had something along the same lines: https://www.youtube.com/watch?v=bgo7rm5Maqg


And yet here you are making an unsourced claim. Should I trust your assertion of “most”?


It's not that black and white. I know of no single person who is correct all the time. And if I did know such a person, I still wouldn't be sure, since they would outsmart me.

I trust some LLMs more than most people because their BS rate is much much lower than most people I know.

For my work, that is easy to verify: just try out the code, try out the tool, or read more about the scientific topic. Ask more questions around it if needed. In the end it all just works, and that's an amazing accomplishment. There's no way back.


In my experience, hesitating to answer a question because of the complexity of the material involved is a strong indicator of genuine expertise paired with conscientiousness. Careless bullshitters like LLMs don't exhibit this behavior.


I can draw on my past experience of interacting with the person to assign a probability to their answer being correct. Every single person in the world does this in every single human interaction they partake in, usually subconsciously.

I can't do this with an LLM because it has no identity and may make random mistakes.

LLMs also lack the ability to say "I don't know", which my fellow humans have.
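If you wanted to make that subconscious bookkeeping explicit, it might look like a Beta-Bernoulli tally of how often a given source has been right before. A toy sketch, purely illustrative; the class and the numbers are made up:

    # Toy sketch: model trust in a source as a Beta distribution over
    # "probability this source's answers are correct".
    class SourceTrust:
        def __init__(self) -> None:
            self.right = 1  # Beta prior alpha = 1 (uninformative)
            self.wrong = 1  # Beta prior beta = 1

        def record(self, was_correct: bool) -> None:
            if was_correct:
                self.right += 1
            else:
                self.wrong += 1

        @property
        def p_correct(self) -> float:
            # Posterior mean of Beta(right, wrong)
            return self.right / (self.right + self.wrong)

    colleague = SourceTrust()
    for outcome in (True, True, False, True):  # 3 right, 1 wrong so far
        colleague.record(outcome)
    print(f"{colleague.p_correct:.2f}")  # 0.67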


It’s trivial to address this.

You ask an actual expert.

I don’t treat any water cooler conversation as accurate. It’s for fun and socializing.


Asking an expert is only trivial if you have access to an expert to ask!


And can judge which one is an expert and which one is bullshiting for the consultancy fee.


And as we've seen in the last few years, large chunks of the population do not trust experts.

I think this thread has gone from “how do we trust AI” to “how do we trust anything”.


This is a true statement.

This is also not related to the problem that the presented solution trivializes.

Lack of access to experts doesn’t improve the quality of water cooler conversations.


Well, if you’re a sensible person, you stop treating them as a subject matter expert.


And people just don't know what they don't know; they answer with silliness all the same.



