
> there should be a moral impetus to always have a human in the loop regardless

I don't understand how one arrives at this. War is crap, not a dinner party. There's always a human on both sides who will drop a bomb and laugh on camera, with no accountability. Go watch it (actually don't, it's NSFL). Reading this thread feels like everyone watched and believed that movie where they spend two hours agonizing over selecting and eliminating a single target with futuristic hi-tech. A human hesitates to press the button before the war. Once in it, he will only be concerned with things like saving ammunition and tactical nuances. There isn't much more morality in the human who usually sits at the button than in AI automation.



The thing that is different is that now the human has an excuse: "The computer told me to put them in the oven."


There's an old IBM presentation going around, from 1979, that says "A computer can never be held accountable, therefore a computer must never make a Management Decision." We know that humans make monstrous decisions in war; many of us remember seeing the Collateral Murder video, and everyone has at least heard of the Nuremberg trials. When humans make monstrous decisions, at least some of them, sometimes, hang for it. The computer here serves mainly to diffuse responsibility for decisions that would be made in any case. Who will hang?


It's a good question. I found myself asking the same thing immediately after posting that comment. I guess part of me just wants as many breakpoints along the process as possible.

But at least then you have someone who is liable when things go wrong. When it's fully automated, as the other comment mentions, they can just shrug and blame the AI. Who gets sued when a self-driving car kills someone by accident? I don't know. Perhaps a lack of ownership is excusable there. But when a weapon deliberately kills someone, I think we need to have ownership somewhere.

Perhaps as a general rule the maker of the AI system should bear liability for it until someone else signs and accepts that responsibility. None of this "Company does not accept liability" crap. They have to make it explicit that "customer accepts liability", or else the liability is theirs. That way they are incentivized to make the military, or whoever, sign.



