To add a little nuance, and a bit of a detour from the original topic: some Rationalists (I'm thinking of Scott Alexander) spend a lot of brainpower on the negative aspects of AI as well - the alignment problem being the obvious example.
The category of events that seems to attract them most is the one with near-infinite positive or negative outcomes and zero to few prior examples, where it's difficult to establish a base rate (prior). At the same time, an imagined demonic relationship with a yet-to-be-realized unaligned AI produces a particular existential paranoia that permeates other enclaves of Rationalist discourse.