Hacker News

> Primarily he focuses not on developing a Strong AI (AGI), but rather focusing on safety issues that such a technology would pose.

That's absurd at worst, science fiction at best, akin to worrying about manned flight safety in the 1500s.



Are you really trying to deny that Google cars and other automated systems at least partially based on AI have safety issues? Even if we're talking about autonomous, "life-like" AI, there is a long list of interesting philosophical and legal questions to be asked. I can't say I find any of the statements here or in the article very appealing, but you shouldn't dismiss real safety/security issues just because you don't like the guy.


Are you really trying to assert that MIRI is addressing systems on the level of Google cars, in any serious technical manner? If so, can you point to examples?


No, I'm saying that AI has wider applications, and I was responding to the manned flight safety example. I'm also arguing that we shouldn't dismiss the guy's arguments just because he's an ass. Especially with regard to this article, we really don't need to resort to a straw man to refute what he wrote.


AI in the sense implied does not exist. Otherwise, "would pose" would be "poses" in the sentence I quoted.



