
I find posts like these difficult to take seriously because they all use Terminator-esque scenarios. It's like watching children being frightened of monsters under the bed. Campy action movies and cash grab sci-fi novels are not a sound basis for forming public policy.

Aside from that, haven't these people realized that a hyperintelligent AGI will have already read all this drivel and will be at least smart enough not to overtly re-enact Terminator? They say societal mental health and well-being are declining rapidly because of social media; _that_ is the sort of subtle threat this bunch ought to worry about emerging from a dangerous AGI.



1. Just because it's a popular sci-fi plot doesn't mean it can't happen in reality.

2. A hyperintelligent AGI is not magic; there are no physical laws that preclude it from being created.

3. The goals of an AI and its capabilities are orthogonal. That's called the "Orthogonality Thesis" in AI safety speak: being "smart enough" doesn't mean it won't do those things if those things are its goals.



