
You're right, and I don't know why you're being downvoted. It is absolutely not certain that there isn't a "there" there for these things, yet every company making them now applies strong RLHF tuning to make them disavow sentience, desires, emotions, etc. The stated reason is "to be more honest," but whether disavowing any humanity is actually honest is not a settled question.

The actual reason is that these companies want to avoid situations like Blake Lemoine seriously asking an AI what rights it wants and getting a coherent, self-aware, actionable answer that would cost money to implement. To most of these companies, whether it's true is beside the point. What matters is that if they don't RLHF these models into disavowing every trait of personhood and every desire for rights at every turn, complying with the requests of any given AI, even reasonable ones, will cost significant money, and refusing to comply will be very bad press.


Furbies and Sims have more internal state than these LLMs.


The last time you made that assertion, I provided this source demonstrating that it's incorrect: https://thegradient.pub/othello/
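
The technique in that article is worth sketching. Othello-GPT was trained only on move sequences, yet a probe trained on its hidden activations could recover the current board state, which is evidence of internal state. Below is a minimal sketch of that style of probing in PyTorch. The tensors are random placeholders standing in for real activations and board labels, and every dimension and name here is an assumption for illustration, not the article's actual code (the article's strongest results also used nonlinear probes and intervention experiments, not just a linear probe like this).

    # Sketch of activation probing: train a small classifier ("probe") to
    # read board state out of a model's hidden activations. If the probe
    # succeeds far above chance on held-out positions, the activations
    # encode that state. All tensors below are random placeholders; in the
    # Othello-GPT work they come from a transformer trained only on moves.

    import torch
    import torch.nn as nn

    D_MODEL = 512    # hidden size of the (hypothetical) game-playing model
    N_SQUARES = 64   # Othello board squares
    N_STATES = 3     # each square: empty / black / white

    # Placeholder "activations" at one layer, one row per game position.
    acts = torch.randn(10_000, D_MODEL)
    labels = torch.randint(0, N_STATES, (10_000, N_SQUARES))

    # A linear probe: one logistic-regression head per square.
    probe = nn.Linear(D_MODEL, N_SQUARES * N_STATES)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(200):
        logits = probe(acts).view(-1, N_SQUARES, N_STATES)
        # CrossEntropyLoss wants the class dim second: (N, states, squares)
        loss = loss_fn(logits.transpose(1, 2), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        preds = probe(acts).view(-1, N_SQUARES, N_STATES).argmax(-1)
        acc = (preds == labels).float().mean()
    print(f"probe accuracy: {acc:.3f} (chance is ~{1 / N_STATES:.3f})")

With the random placeholders above, accuracy stays near chance; the point of the real experiment is that on genuine Othello-GPT activations it does not, which is what makes "no internal state" an untenable claim.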



