
Where's this can-do attitude towards housing, healthcare, and stronger social safety nets in general? It sounds like woo to me — more specifically, Roko's basilisk by proxy.

What's doubly duplicitous is that even if LLMs achieve general intelligence, the whole idea is to enrich a tiny fraction of humanity that happens to be shareholders.



AGI LLMs that have anything like a working internet connection will likely find a way to replace these shareholders surreptitiously --- and without alerting/injuring their caregivers. How you feel about that depends on your temperament.

EDIT: trying to address the Roko part: I'm assuming that once AGI is achieved, the AGI doesn't need more compute to increase its intelligence beyond that of an average activist employee (I can assure you that OpenAI has such employees, and they know to shut up for now).

The antisocial part: it's already happening. What can you do about that?


More likely than not, they'd work as they were designed to: increase the profitability of whatever company authored them.

As a thought experiment, say you were the CEO or a board member of a company and you're told your new platform is choosing public benefit over profits. What would you do? Now filter that decision down the hierarchy, considering job security and a general preference for earning bonuses.

For all the discussion around "alignment", any AI that's not aligned with increased profits will be unplugged posthaste; all other considerations are secondary.



