Hacker News — obi1one's comments

I don't think a benevolent AI is impossible, or even unlikely, but I do think that as soon as the benevolent AI exists, there are a lot of people who will work very hard to find a way to exploit the tech for military purposes. So to me, the bad scenarios are essentially inevitable, even if the good scenario comes about.

That doesn't mean the bad scenarios are as bad as people make out. But whether the AI is itself seeking to destroy humanity or is just being used by militaristic people to plan simultaneous preemptive wars against everyone they see as a threat, the technology looks very dangerous to me. We will try to build it; that is just the human way. But as with the creation of other dangerous technologies, we should be thinking about the dangers and how we will cope with them.


The idea that a hostile super-AI will take over and/or cause a doomsday scenario is no more plausible than the reverse, and the reverse would be equally irreversible.

A benevolent AI would immediately prioritize:

A) Eliminating hostile AI research.

B) Advancing the good of humanity on the metrics it was given.


I was in contact with Jared ~1.5 years ago based on a post on HN. It didn't work out, but he certainly got back to me quickly at every stage.


The idea that there are lots of great engineers out there who just need mentoring and guidance to excel sounds great, but the reality of the industry seems to be that people don't stay in one place very long.

I think there is an opportunity there, but you would need to use contracts or something to make sure the time and effort you are putting into making this person great isn't just a gift you are giving to the company they move to next year. Also, I don't think startups are at all well placed to do this, given their time pressures. Google, Apple and Facebook could totally be doing this, though.

