This article has too many holes to count and reads like it was written by someone in denial.

---

BUT, as an aside: I think a decent argument exists for the non-certainty of intelligence explosion.

The argument goes like this: it takes an intelligence of level X to engineer an intelligence of level X+1.

First, it may well be that humans are not an intelligence of level X; we may reach our limit before we engineer an intelligence superior to our own.

Furthermore, even if we do, it may be that engineering an intelligence of level X+2 requires an intelligence of level X+2 itself (and likewise for some higher level X+n). In that case the chain stalls: we end up with an AI only somewhat superior to ourselves, but no god-like singularity. For example, we end up with Data from Star Trek TNG, who in season 3, episode 16 fails to engineer an offspring superior to himself; Data is far superior to his human peers in some respects, but not crushingly so. (A toy sketch of the argument's structure follows below.)
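To make the shape of that argument concrete, here is a toy numerical sketch in Python. Everything here is a hypothetical illustration of my own framing, not anything from the article: model "the highest intelligence level an intelligence of level x can engineer" as a function f, and recursive self-improvement as iterating f. Whether you get an explosion or a Data-style plateau then depends entirely on whether f keeps satisfying f(x) > x or instead approaches a fixed point.

    # Toy model of the argument above (purely illustrative; the function
    # names and numbers are made up). Let f(x) be the highest intelligence
    # level an intelligence of level x can engineer. Recursive
    # self-improvement is just iterating f.

    def iterate(f, x0, steps=50):
        """Iterate the design-ceiling function f from starting level x0."""
        x = x0
        for _ in range(steps):
            nxt = f(x)
            if nxt <= x:  # no further improvement possible: a plateau
                return x
            x = nxt
        return x

    # Scenario A: each level can always design one level higher -> explosion.
    explosive = lambda x: x + 1

    # Scenario B: the achievable gain shrinks at each step -> converges to a
    # fixed point near 12 (the "Data" scenario: somewhat superior to the
    # starting level, but no singularity).
    plateauing = lambda x: x + max(0.0, 12.0 - x) * 0.1

    print(iterate(explosive, 10))   # 60 after 50 steps, and still climbing
    print(iterate(plateauing, 10))  # ~11.99: improvement stalls near 12

The point of the sketch is only that "each generation is smarter than the last" does not by itself imply unbounded growth; it is consistent with both curves above.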



I think everyone agrees about "non-certainty". Where people disagree is on how likely an intelligence explosion is, and in particular whether it is likely enough to warrant spending effort planning for it.


We don't know enough to know whether it's possible. If it is, we don't know enough to know what approach to follow to get there.

Is it worth spending effort to plan for it? Maybe some. But if we don't know what approach to follow to get there, we don't know what its capabilities and limitations will be. That means we don't know what we have to plan for. Any planning will therefore be either very speculative or very abstract.

I wouldn't start pouring effort into planning for it as if it were the most important problem in the world...



