
Why do you think humans would make better life or death decisions? Have we never had innocent civilians killed overseas by US military as a result of human error?


The problem with these things is that they allow humans to pretend that they are not responsible for those decisions, because "computer told me to do so". At the same time, the humans who are training those systems can also pretend to not be responsible because they are just making a thing that provides "suggestions" to humans making the ultimate decision.

Again, look at what's happening in Gaza right now for a good example of how this all is different from before: https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...


With self-driving cars, some human will be held responsible in the event of an accident, I hope. Why would it be different here? It seems like a responsibility problem, not a technology one.


I'm not talking about a matter of formal responsibility here, especially since the enforcement mechanisms for things like war crimes are very weak due to the lack of a single global authority capable of enforcing them (see the ongoing ICC saga). It's about whether people feel personally responsible. AI provides a way to diffuse and redirect the moral responsibility that might otherwise deter them.



