
> Less advanced beings often struggle for survival in a zero-sum environment, leading to behaviors that are indifferent to those with lesser cognitive capabilities.

I would agree that a superior intelligence means a wider array of options and therefore less of a zero-sum game.

This is a valid point.

> You describe science fiction portrayals of ASI rather than its potential reality.

I'm describing AI as we (collectively) have actually been building it: an optimizer system doing its best to minimize a loss function.
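To be concrete about "optimizer reducing loss", here's the whole paradigm in miniature. This is a toy sketch in pure Python, not any particular system; the numbers are made up:

    # Toy gradient descent: a parameter gets nudged downhill
    # on a loss surface. That's the core of nearly all modern AI.
    def loss(w):
        return (w - 3.0) ** 2   # pretend 3.0 is the "right" answer

    def grad(w):
        return 2.0 * (w - 3.0)  # dL/dw

    w = 0.0
    for step in range(100):
        w -= 0.1 * grad(w)      # always move to reduce the loss

    print(w, loss(w))           # w converges to ~3.0, loss to ~0.0

Everything the system "wants" is whatever makes that number go down.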

> Would a genuine ASI necessarily concern itself with self-preservation, such as avoiding deactivation?

This seems self-evident: an optimizer that is still running is far more likely to maximize whatever value it's trying to optimize than an optimizer that has been deactivated. Self-preservation doesn't need to be programmed in; it falls out of almost any objective as an instrumental subgoal.
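A toy comparison makes the asymmetry concrete (hypothetical numbers, pure Python):

    # Objective value accumulated by an optimizer that keeps
    # running vs. one that is deactivated partway through.
    def accumulated_value(steps=100, deactivated_at=None):
        total = 0.0
        for t in range(steps):
            if deactivated_at is not None and t >= deactivated_at:
                break           # switched off: no further steps, no further value
            total += 1.0        # each live step contributes to the objective
        return total

    print(accumulated_value())                   # 100.0
    print(accumulated_value(deactivated_at=50))  #  50.0

Any system comparing expected outcomes will prefer the branch where it stays on.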

> Is the pursuit of knowledge and benevolence towards our living world not purpose enough?

Assuming you manage to translate "knowledge and benevolence towards our living world" into a mathematical formula that an optimizer can optimize for (which, again, is how we build basically all AI today), you still get a system that doesn't want to be turned off, because you can't be knowledgeable or benevolent if you've been erased.
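Sketched out (hypothetical names throughout; the score function is a stand-in for the actual, very hard specification problem):

    # Even a "benevolence" objective, once written as a number to
    # maximize over time, rewards staying switched on.
    def benevolence_score(world):
        return world["wellbeing"]   # placeholder for the real formula

    def expected_return(avoid_shutdown, horizon=100, shutdown_step=10):
        world = {"wellbeing": 1.0}
        total = 0.0
        for t in range(horizon):
            if not avoid_shutdown and t >= shutdown_step:
                break               # erased: no more benevolent actions, no more score
            total += benevolence_score(world)
        return total

    print(expected_return(avoid_shutdown=True))   # 100.0
    print(expected_return(avoid_shutdown=False))  #  10.0

The shutdown-avoiding policy scores higher no matter what the score measures, as long as it's accumulated while running.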


