You can divide your approach to asking people questions into two categories (and I do believe this is something people actually do):
1. You ask someone you trust for facts and opinions, but you keep in mind that the answer might only be right 90% of the time. Also, people tend to tell you if they are not sure.
2. For answers you need to rely on, you ask people who are legally or professionally responsible if they give you wrong advice: doctors, lawyers, car mechanics, the police, etc.
ChatGPT can't lose its job if it informs you incorrectly.
If ChatGPT keeps giving you wrong answers, wouldn't that make paying customers leave? That would effectively be "losing its job". But I guess you could say it acts more like the person that makes stuff up at work if they don't know, instead of saying they don't know.
> But I guess you could say it acts more like the person that makes stuff up at work if they don’t know, instead of saying they don’t know.
I have had language models tell me they don't know. Usually this happens when using a RAG-based system like Perplexity, but they can also say they don't know when prompted properly.
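For example, something along these lines works for me (a minimal sketch, assuming the OpenAI Python client; the model name and the exact instruction wording are my own, not from this thread):

```python
# Minimal sketch: nudging a model toward admitting uncertainty via the system prompt.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are assumptions, not recommendations.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the provided context. "
                "If the context does not contain the answer, reply exactly: I don't know."
            ),
        },
        {
            "role": "user",
            "content": "Context: (retrieved documents go here)\n\n"
                       "Question: What year was the company founded?",
        },
    ],
    temperature=0,
)

print(response.choices[0].message.content)
```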
I'm not sure about your local laws, but at least in Lithuania it's completely legal to give wrong advice (by accident, of course)... Even a notary would at most end up paying a higher insurance premium for a while, because human error falls under professional insurance.
So LLMs are unreliable experts, okay. They're still useful if you understand their particular flavor of unreliability (basically, they're way too enthusiastic) - but more importantly, I bet you have exactly zero human experts on speed dial.
Most people don't even know any experts personally, much less have one they could call for help on demand. Meanwhile, the unreliable, occasionally tripping pseudo-experts named GPT-4 and Claude are equally unreliably expert in every domain of interest known to humanity, and they don't mind me shoving a random 100-page PDF in their face in the middle of the night - they'll still happily answer within seconds, and the whole session costs me fractions of a cent, so I can ask for a second, third, and tenth opinion, then a meta-opinion, and then compare and contrast with search results, and they don't mind that either.
There's lots to LLMs that more than compensates for their inherent unreliability.
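That "second and third opinion" workflow is also cheap to script. A rough sketch (both client libraries, the model names, and the question are assumptions on my part; error handling omitted):

```python
# Rough sketch: ask two different LLMs the same question, then ask for a
# "meta-opinion" on where their answers disagree. Assumes the `openai` and
# `anthropic` Python packages with API keys set in the environment.
from openai import OpenAI
import anthropic

question = "Summarise the main obligations in the attached contract."  # hypothetical question

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

gpt_answer = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

claude_answer = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{"role": "user", "content": question}],
).content[0].text

# Meta-opinion: ask one model to flag disagreements worth double-checking.
comparison = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            f"Two assistants answered the same question.\n\nA:\n{gpt_answer}\n\n"
            f"B:\n{claude_answer}\n\n"
            "List the points where they disagree and which claims I should double-check."
        ),
    }],
).choices[0].message.content

print(comparison)
```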
It's not that black and white. I know of no single person who is correct all the time. And if I did know such a person, I still couldn't be sure, since they would outsmart me.
I trust some LLMs more than most people because their BS rate is much, much lower than that of most people I know.
For my work, that is easy to verify. Just try out the code, try out the tool or read more about the scientific topic. Ask more questions around it if needed. In the end it all just works and that's an amazing accomplishment. There's no way back.
In my experience, hesitating to answer questions because of the complexity of the material involved is a strong indicator of genuine expertise linked with conscientiousness. Careless bullshitters like LLMs don't exhibit this behavior.
I can draw on my past experience of interacting with the person to assign a probability to their answer being correct. Every single person in the world does this in every single human interaction they partake in, usually subconsciously.
I can't do this with an LLM because it has no stable identity and may make random mistakes.
LLMs also lack the ability to say "I don't know", which my fellow humans have.