
Anecdotal, but I told ChatGPT to include its level of confidence in its answers and to let me know if it didn't know something. This priming resulted in it starting almost every answer with some variation of "I'm not sure, but..." when I asked it vague or speculative questions, and answering with confidence when I asked it direct, matter-of-fact questions with easy answers.

That's not to say I think it is rationalizing its own level of understanding, but somewhere in the vector space it seems to have a gradient for speculative language. If primed to include language about it, that could help cut down on some of the hallucination. No idea whether this will affect the rate of false positives on the statements it does still answer confidently, however.
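For anyone who wants to reproduce that kind of priming, here's a minimal sketch using the OpenAI Python client; the model name and the exact system-prompt wording are my own illustrative assumptions, not necessarily what the parent used:

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # System prompt that primes the model to state its confidence and to
    # admit when it doesn't know something (wording is illustrative).
    SYSTEM = (
        "Before answering, state how confident you are (low/medium/high). "
        "If you don't know something or are unsure, say so instead of guessing."
    )

    def ask(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name for the example
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    # Speculative question -> tends to get a hedged preface;
    # direct factual question -> tends to get a confident answer.
    print(ask("Will fusion power be commercially viable by 2040?"))
    print(ask("What is the capital of France?"))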



You'd have to check the veracity of those leading phrases. My guess is that it just prefaces the answer with a randomly chosen expression of doubt. For anything more than that, the error bar behind every bit of knowledge would have to exist in the dataset.

(And in neural-network terms, that error bar could be represented by the number of connections, by the congruence of separate lines of argument, by the vividness of memories, etc. ... it's not beyond human reasoning either; no need for new data structures ...)
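One crude, measurable stand-in for that error bar already exists at the output layer: the chat completions API can return per-token log-probabilities. A rough sketch below; the model name and the idea of reading token probabilities as a confidence proxy are my assumptions, not an established calibration method:

    from math import exp
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": "What is the capital of Australia?"}],
        logprobs=True,        # ask for per-token log-probabilities
        top_logprobs=3,       # plus the top alternatives per token
    )

    choice = resp.choices[0]
    print(choice.message.content)
    for tok in choice.logprobs.content:
        # Low probability on a content-bearing token is one rough proxy for
        # the model being "unsure", independent of any hedging phrase it emits.
        print(f"{tok.token!r}  p={exp(tok.logprob):.3f}")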




