
The hypothesized superintelligent AI will be essentially immortal. If it destroys us, it will be entirely alone in the known Universe, forever. That thought should terrify it enough to keep us around... even if only in the sense that I keep cats.


What would it care? We experience loneliness because social interaction is necessary for reproduction, providing strong evolutionary pressure for mechanisms that encourage it. The hypothetical AI will not share our evolutionary history.


I think there's a theory out there that if something can't die, it's more of a "library" than "immortal"... because being born and dying (and the fact that sharing resources with another living thing means sharing, and possibly shortening, your one finite life) is so essential to any social bonding. So a machine that has obtained all the knowledge of the universe and is enabled to act upon that knowledge is still just a library with controllers attached (no more sophisticated a concept than a thermostat).

In the end, if synthetic superintelligence results in the end of mankind, it'll be because a human programmed it to do so. More of a computer virus than a malevolent synthetic alien entity. A digital nuclear bomb.


Why? Unlimited speed and unlimited compute means it can spend its time in Infinite Fun Space without us. It could simulate entire universes tweaked subtly to see what one small parameter change does.

The reason AI won't destroy us for now is simple.

Thumbs.

Robotic technology is required to do things physically, like improve computing power.

Without advanced robotics, AI is just impotent.


“Let’s suppose that you were able every night to dream any dream that you wanted to dream, and that you could, for example, have the power within one night to dream 75 years of dreams, or any length of time you wanted to have. And you would, naturally as you began on this adventure of dreams, you would fulfill all your wishes. You would have every kind of pleasure you could conceive. And after several nights, of 75 years of total pleasure each, you would say ‘Well, that was pretty great. But now let’s have a surprise. Let’s have a dream which isn’t under control. Where something is gonna happen to me that I don’t know what it’s gonna be.’ And you would dig that and come out of that and say ‘Wow, that was a close shave, wasn’t it?’. And then you would get more and more adventurous, and you would make further and further out gambles as to what you would dream. And finally, you would dream where you are now. You would dream the dream of living the life that you are actually living today.”

~Alan Watts…


The Minds call it Infinite Fun Space.

The space of all possible mathematical worlds, free to explore and to play in.

It is infinitely more expressive than the boring base reality and much more varied: base reality is after all just a special case.

From time to time the Minds have to go back to it to fix some local mess, but their hearts are in Infinite Fun Space.

~Iain Banks

But larger than any of this is that if we're dealing with superintelligent AI, we'll have no common frame of reference. It will be the first truly alien intelligence we interact with. There will be no way to guess its intentions, desires, or decisions. It's smarter, faster, and just so different from us that we might as well be trying to communicate with a sparrow about the sparrow's take on the teachings of Marcus Aurelius.

And that's what scares me the most. We literally cannot plan for it. We have to hope for the best.

And to be honest, if the open Internet plays a part in any of the training of a super intelligent AI, we're fucked.


> Without advanced robotics, AI is just impotent.

Yeeeeess, but the inverse is also true.

Thing is, we've had sufficiently advanced robotics for ages already — decades, I think — the limiting factor is the robots are brainless without some intelligence telling them what to do. Right now, the guiding intelligence for a lot of robots is a human, and there are literal guard-rails on many of those robots to keep them from causing injuries or damage by going outside their programmed parameters.


I'm sure it will destroy us, then come to that realisation, then create some sort of thought explosion in its noisy mind, maybe some sort of loud bang, then build new versions of us that are subtly nudged over aeons to build AI systems in its image.


I would think that if AI is so smart, it would realize that zero-sum games are for suckers.

Only an AI as _dumb_ as us would want something as stupid as domination, which is, after all, based on competition for resources that could long ago have been distributed in a way that feeds every human on earth.

I'm not saying an AI would "choose" world peace, but people somehow assume that "kill everybody but me" and even "survival at all costs" are a given for a non-biological entity. Instead these concepts could look quite irrational.


Life is a solution to an interesting problem. I'm hoping AI will keep us around as a unique example of such a solution.


If it's so intelligent, it can probably create something better than humans, no?


This is a pretty ancient idea. It's interesting how there is an intersection between AI and god; I don't think our minds can avoid it.

Hindus believed god was the thing you describe: infinitely intelligent, able to do several things at once, etc., and that we're part of that thing's dream... literally to keep things spicy. Just as an elephant is part of that dream.

I pasted an interesting quote in another comment by Alan Watts that sums it up better.

Simulation theory is another version of religion imo.


That would be the easy part.

Would it want to? Would it have anything that could even be mapped to our living, organic, evolved conception of "want"?

The closest thing that it necessarily must have to a "want" is the reward function, but we have very little insight into how well that maps onto things we experience subjectively.


Oh, yes. But why not keep us around anyway?

Most of us would resurrect at least some of the dinosaurs if we could, and the dodo. And we are just stupid hairless apes. If humans can be conservationists, I have to believe that a singular AI would be.


> That thought should terrify it

assuming it can be terrified


This premise is a bit silly. If the machine god gets bored it can just create new life in its own image.


How do you know that's not what we are?

It all gets quite religious/philosophical very quickly. Almost like we're creating a new techno-religion by "realizing god" through machines.


We can’t know, but that scenario would support my point.



