I don't think a benevolent AI is impossible, or even unlikely, but I do think that as soon as the benevolent AI exists, there are a lot of people who will work very hard to find a way to exploit the tech for military purposes. So to me, the bad scenarios are essentially inevitable, even if the good scenario comes about.
That doesn't mean the bad scenarios are as bad as people make out, but whether the AI is itself seeking to destroy humanity or is just being used by militaristic people to plan simultaneous preemptive wars against everyone they see as a threat, the technology looks very dangerous to me. We will try to build it; that is just the human way. But as with the creation of other dangerous tech, we should be thinking about the dangers and how we will cope with them.
The idea that there are lots of great engineers out there who just need mentoring and guidance to excel sounds great, but the reality of the industry seems to be that people don't stay in one place very long.
I think there is an opportunity there, but you would need to use contracts or something similar to make sure the time and effort you put into making this person great isn't just a gift to the company they move to next year. Also, I don't think startups are at all well placed to do this, given their time pressures. Google, Apple, and Facebook could totally be doing this, though.