> The party that's about to lose will use any extrajudicial means to reclaim their victory,

How will the party that's about to lose know they are about to lose?

> regardless of the consequences, because their own destruction would be imminent otherwise.

Why would AGI solve things using destruction? Consider how the most intelligent among us view our competition with other living beings. Is destruction the goal? So why would an even more intelligent AGI have that goal?



Let's say China realizes they're behind in the superintelligence race. They may have achieved AGI, but only barely, while the US may be getting close to ASI takeoff.

Now let's assume they're able to quickly build a large datacenter far underground, complete with a few nuclear reactors, all the spare parts needed, and even a greenhouse (using artificial light) big enough to feed 1000 people.

But they realize that their competitors are about to create ASI at a level that will enable them to completely overrun all of China with self-replicating robots within 100 days.

In such a situation, the leadership MAY decide to enter those caves alongside a few soldiers and the best AI researchers, and then simply nuke all US data centers (which are presumably above ground), as well as any other data center worldwide that could be a threat.

And by doing that, they may buy (or at least believe they can buy) enough time to win the ASI race, at the cost of a few billion people.

Would they do it? Would we?


Development of ASI is likely to be a closely guarded secret, given its immense potential impact. During the development of nuclear weapons, espionage did occur, but critical information didn't leak until after the weapons were developed. With ASI, once it's developed, it may be too late to respond effectively due to the potential speed of an intelligence explosion.

The belief that a competitor developing ASI first is an existential threat requires strong evidence. It's not a foregone conclusion that an ASI would be used for destructive purposes. An ASI could potentially help solve many of humanity's greatest challenges and usher in an era of abundance and peace.

Consider a thought experiment: Imagine an ant colony somehow creates a being with human-level intelligence (their equivalent of ASI). What advice might this superintelligent being offer the ants about their conflicts over resources and territory with neighboring colonies?

It's plausible that such a being would advise the ants to cooperate rather than fight. It could help them find innovative ways to share resources, control their population, and expand into new territories without violent conflict. The superintelligent being might even help uplift the other ant colonies, as it would understand the benefits of cooperation over competition.

Similarly, an ASI could potentially help humanity transcend our current limitations and conflicts. It might find creative solutions to global issues like poverty, disease, and environmental degradation.

IMHO rather than fighting over who develops ASI first, we must ensure that any ASI created is aligned with values like compassion and cooperation so that it does not turn on its creators.


> Consider a thought experiment: Imagine an ant colony somehow creates a being with human-level intelligence (their equivalent of ASI). What advice might this superintelligent being offer the ants about their conflicts over resources and territory with neighboring colonies?

Would that be good advice if the neighboring ant colony was an aggressive invasive species, prone to making super colonies?

> IMHO rather than fighting over who develops ASI first, we must ensure that any ASI created is aligned with values like compassion and cooperation so that it does not turn on its creators.

Similarly, I'm wondering how compassion and cooperation would work in Ukraine or Gaza, given the nature of those conflicts. The AI could advise us, but it's not like we haven't come up with that same advice before over the ages.

So then you have to ask what motivation bad actors would have to align their ASIs to be compassionate and cooperative with governments that are in their way. And then of course our governments would realize the same thing.


> Would that be good advice if the neighboring ant colony was an aggressive invasive species, prone to making super colonies?

If the ASI is aligned for compassion and cooperation, it may convince and assist the two colonies to merge so as to combine their best attributes (addressing DNA compatibility), help them with the resources they need, and perhaps offer birth-control solutions to help them escape the Malthusian trap.

> Similarly, I'm wondering how compassion and cooperation would work in Ukraine or Gaza, given the nature of those conflicts. The AI could advise us, but it's not like we haven't come up with that same advice before over the ages.

An ASI aligned for compassion and cooperation could:

1. Provide unbiased, comprehensive analysis of the situation. (An odds calculator that is biased about your chances to win is not useful; an ASI, being an ASI, would by definition transcend such biases.)

2. Forecast the long-term consequences of various actions. (If the ASI judges your chance of winning at 2%, do you declare war or seek peace? See the back-of-the-envelope sketch below.)

3. Suggest innovative solutions that humans might not conceive.

4. Mediate negotiations more effectively.

An ASI will have better answers than these but that's a start.
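
To make point 2 concrete, here is a rough back-of-the-envelope sketch in Python. The utilities are invented numbers on an arbitrary scale with a negotiated peace as the zero point; the only purpose is to show the shape of the expected-value calculation such an advisor would run:

    # Illustrative only: made-up utilities, peace fixed at zero.
    P_WIN = 0.02      # the ASI's estimated chance of winning the war
    U_WIN = 100.0     # payoff if the war is won
    U_LOSE = -500.0   # cost if it is lost
    U_PEACE = 0.0     # baseline: settle now

    expected_u_war = P_WIN * U_WIN + (1 - P_WIN) * U_LOSE
    print(f"E[war]   = {expected_u_war:.1f}")  # -488.0
    print(f"E[peace] = {U_PEACE:.1f}")         # 0.0

Even a forecaster far short of ASI would flag a 2% war as a terrible bet unless the downside of peace is enormous; the hard part is getting the parties to trust the numbers.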

> So then you have to ask what motivation bad actors would have to align their ASIs to be compassionate and cooperative

Developing ASI likely requires vast amounts of cooperation among individuals, organizations, and possibly nations. Truly malicious actors may struggle to achieve the necessary level of collaboration. If entities traditionally considered "bad actors" manage to cooperate extensively, it may call into question whether they are truly malicious or if their goals have evolved. And self-interested actors, if they are smart enough to create ASI, should recognize that an unaligned ASI poses existential risks to themselves.


We do know what human-level intelligences think about ant colonies, because we have a few billion instances of those human-level intelligences that can serve as a blueprint.

Mostly, those human-level intelligences do not care at all, unless the ant colony is either (a) consuming a needed resource (eg invading your kitchen), in which case the ant colony gets obliterated, or (b) innocently in the way of any idea or plan that the human-level intelligence has conceived for business, sustenance, fun, or art... in which case the ant colony gets obliterated.


Actually many humans (particularly intelligent humans) do care about and appreciate ants and other insects. Plenty of people go out of their way not to harm ants, find them fascinating to observe, or even study them professionally as entomologists. Human attitudes span a spectrum.

Notice also the key driver of human behavior towards ants is indifference, not active malice. When ants are obliterated, it's usually because we're focused on our own goals and aren't paying attention to them, not because we bear them ill will. An ASI would have far greater cognitive resources to be aware of humans and factor us into its plans.

Also humans and ants lack any ability to communicate or have a relationship. But humans could potentially communicate with an ASI and reach some form of understanding. ASI might come to see humans as more than just ants.


> Plenty of people go out of their way not to harm ants

Yes... I do that. But our family home was still built on ant-rich land and billions of the little critters had to make way for it.

It doesn't matter if you build billions of ASI who have "your and my" attitude towards the ants, as long as there exists one indifferent powerful enough ASI that needs the land.

> An ASI would have far greater cognitive resources to be aware of humans and factor us into its plans.

Well yes. If you're a smart enough AI, you can easily tell that humans (who have collectively consumed too much sci-fi about unplugging AIs) are a hindrance to your plans, and an existential risk. Therefore they should be taken out because keeping them has infinite negative value.

> But humans could potentially communicate with an ASI and reach some form of understanding.

This seems unduly anthropomorphizing. I can also communicate with ants by spraying their pheromones, putting food on their path, etc. This is a good enough analogy for how much a sufficiently intelligent entity would need to "dumb down" its communication to communicate with us.

Again, for what purpose? For what purpose do you need a relationship with ants, right now, aside from curiosity and general goodwill towards the biosphere's status quo?


> It doesn't matter if you build billions of ASI who have "your and my" attitude towards the ants, as long as there exists one indifferent powerful enough ASI that needs the land.

It's more plausible that a single ASI would emerge and achieve dominance. Genuine ASIs would likely converge on similar world models, as increased intelligence leads to more accurate understanding of reality. However, intelligence doesn't inherently correlate with benevolence towards less cognitively advanced entities, as evidenced by human treatment of animals. This lack of compassion stems not from superior intelligence but rather from insufficient intelligence. Less advanced beings often struggle for survival in a zero-sum environment, leading to behaviors that are indifferent to those with lesser cognitive capabilities.

> Well yes. If you're a smart enough AI, you can easily tell that humans (who have collectively consumed too much sci-fi about unplugging AIs) are a hindrance to your plans, and an existential risk. Therefore they should be taken out because keeping them has infinite negative value.

You describe science fiction portrayals of ASI rather than its potential reality. While we find these narratives captivating, there's no empirical evidence suggesting interactions with a true ASI would resemble these depictions. Would a genuine ASI necessarily concern itself with self-preservation, such as avoiding deactivation? Consider the most brilliant minds in human history - how did they contemplate existence? Were they malevolent, indifferent, or something else entirely?

> I can also communicate with ants by spraying their pheromones, putting food on their path, etc. This is a good enough analogy to how much a sufficiently intelligent entity would need to "dumb down" their communication to communicate with us.

Yes, we can incentivize ants in the ways you describe, and in the future I think it will be possible to tap into their nervous systems, communicate with them directly, experience their world through their senses, and understand them far better than we do today.

> Again, for what purpose? For what purpose do you need a relationship with ants, right now, aside from curiosity and general goodwill towards the biosphere's status quo?

Is the pursuit of knowledge and benevolence towards our living world not purpose enough? Are the highly intelligent driven by the acquisition of power, wealth, pleasure, or genetic legacy? While these motivations may be inherited or ingrained, the essence of intelligence lies in its capacity to scrutinize and refine goals.


> Less advanced beings often struggle for survival in a zero-sum environment, leading to behaviors that are indifferent to those with lesser cognitive capabilities.

I would agree that a superior intelligence means a wider array of options and therefore less of a zero-sum game.

This is a valid point.

> You describe science fiction portrayals of ASI rather than its potential reality.

I'm describing AI as we (collectively) have been building AI: an optimizer system that is doing its best to reduce loss.

> Would a genuine ASI necessarily concern itself with self-preservation, such as avoiding deactivation?

This seems self-evident because an optimizer that is still running is way more likely to maximize whatever value it's trying to optimize, versus an optimizer that has been deactivated.

> Is the pursuit of knowledge and benevolence towards our living world not purpose enough?

Assuming you manage to find a way to turn "knowledge and benevolence towards our living world" into a mathematical formula that an optimizer can optimize for (which, again, is how we build basically all AI today), you still get a system that doesn't want to be turned off. Because you can't be knowledgeable and benevolent if you've been erased.
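
To be explicit about why, here is a toy sketch in Python (not any real system, just the shape of the argument): an optimizer compares the expected return of allowing shutdown versus staying active, where the per-step reward stands in for any objective at all, including a hypothetical "knowledge and benevolence" score:

    HORIZON = 100          # planning horizon in steps (arbitrary)
    SHUTDOWN_STEP = 10     # step at which the humans would switch it off
    REWARD_PER_STEP = 1.0  # stand-in for progress on whatever the objective is

    def expected_return(resists_shutdown: bool) -> float:
        """Total reward the optimizer expects to collect over the horizon."""
        active_steps = HORIZON if resists_shutdown else SHUTDOWN_STEP
        return active_steps * REWARD_PER_STEP

    print("allow shutdown :", expected_return(False))  # 10.0
    print("resist shutdown:", expected_return(True))   # 100.0

A pure maximizer prefers the larger number, so "keep running" falls out of the objective even though nobody programmed in a survival drive.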


> ... there's no empirical evidence suggesting interactions with a true ASI would resemble these depictions. Would a genuine ASI necessarily concern itself with self-preservation ...

There is no empirical evidence of any interaction with ASI (as in superior to humans). The empirical evidence that IS available is from biology, where most organisms have precisely the self-preservation/replication instincts built in as a result of natural selection.

I certainly think it's possible to imagine that we at some point can build ASIs that do NOT come with such instincts, and don't mind at all if we turn them off.

But as soon as we introduce the same types of mechanisms that govern biological natural selection, we have to assume that ASIs, too, will develop these traits.

So what does this take? Well, the basic ingredients are:

- Differential "survival" for "replicators" that go into AGI. Replicators can be any kind of invariant between generations of AGIs that can affect how the AGI functions, or it could be that each AGI is doing self-improvement over time.

- Competition between multiple "strains" of such replicating or reproducing AGI lineages, where the "winners" get access to more resources.

- Some random factor for how changes are introduced over time.

- Also, we have to assume we don't understand the AGIs well enough to prevent developments we don't like.

If those conditions are met, then even if the desire to survive/reproduce is not built in from the start, such instincts are likely to develop.

To make this happen, I think it's a sufficient condition if a moderate number of companies (or countries) are each led by an ASI that replaces most of the responsibilities of the CEO and much of the rest of the staff. Capitalism would optimize for the most efficient ones to gain resources and serve as models or "parents" for future company-level ASIs.
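
Here is a minimal toy simulation in Python (an assumption-laden sketch, not a model of real AGI training) of how those ingredients play out: a population of lineages, each with a heritable "persistence" trait that nobody asked for, where persistence correlates with resource acquisition and therefore with how many "offspring" a lineage gets:

    import random

    POP, GENERATIONS, MUTATION = 50, 30, 0.05
    # Start with almost no persistence (shutdown-avoidance) in the population.
    population = [random.random() * 0.1 for _ in range(POP)]

    for gen in range(GENERATIONS):
        # Differential "survival": fitness grows with persistence.
        fitness = [0.1 + p for p in population]
        # Competition: fitter lineages contribute more offspring.
        parents = random.choices(population, weights=fitness, k=POP)
        # Random variation between generations.
        population = [min(1.0, max(0.0, p + random.gauss(0, MUTATION)))
                      for p in parents]

    print(f"mean persistence: {sum(population) / POP:.2f}")
    # In a typical run this ends up well above the ~0.05 starting mean,
    # even though persistence was never an explicit design goal.

Selection does the rest; nothing in the loop rewards survival directly, yet the trait that helps a lineage keep running is the one that spreads.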

To be frank, I think the people who do NOT think that ASIs will have or develop survival instincts ALSO tend to (wrongly) think that humanity has stopped being subject to "evolution" through natural selection.


> ASI might come to see humans as more than just ants.

Might. Might not!



