That would be the "Zeroth Law", zeroth because it supersedes all the others. A version of it first showed up in Asimov's short story "The Evitable Conflict" (see the links below).
If you give robots autonomy, they inevitably end up having to make moral decisions. For example, "Should I, an autonomous car, run over the elderly man or the girl with terminal cancer, those being the only two options?"
Asimov's laws (initially suggested by an editor, John W. Campbell) were a first pass at some principles for decision-making. Others have since devised more elaborate ones.
It's all still just words. No two humans could ever fully agree on what constitutes "harm"; racists would even debate who counts as "human".
It's as if everybody exposes the public methods what_is_harm() and what_is_a_human(), but what matters is how they're implemented, not the function signature. The Laws of Robotics are just stubs for code that never got written.
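To make the analogy concrete, here's a toy Python sketch (the class names and rules are mine, invented purely for illustration, not anyone's real API):

    # Two agents expose identical public methods, but their
    # implementations disagree about what the words mean.
    class AgentA:
        def what_is_harm(self, outcome: str) -> bool:
            # One implementation: only physical injury counts.
            return outcome in {"injury", "death"}

        def what_is_a_human(self, being: str) -> bool:
            return being == "homo sapiens"

    class AgentB:
        def what_is_harm(self, outcome: str) -> bool:
            # Another implementation: economic loss counts too.
            return outcome in {"injury", "death", "bankruptcy"}

        def what_is_a_human(self, being: str) -> bool:
            # A narrower (and uglier) definition is just as easy to write.
            return being == "member of my in-group"

    # Same signatures, contradictory behavior:
    for agent in (AgentA(), AgentB()):
        print(agent.what_is_harm("bankruptcy"), agent.what_is_a_human("homo sapiens"))

Both classes satisfy the "law" at the signature level; everything that matters lives in the bodies.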
Words are tricky. What if robots decide that the best way to keep harm from coming to humans is to prevent any from being born? So you have to rephrase it into something like maximizing human happiness. Oh no, now they're injecting everyone with drugs and cloning humans by the planetful. What now? Actual source or it didn't happen... just creating an empty file called solution_to_the_problem.txt doesn't work, I tried it.
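That failure mode is easy to demo with a toy objective function (the numbers and names are made up for the sake of the argument, not any real model):

    # A literal-minded optimizer told to "maximize human happiness"
    # compares strategies against the stated objective, and nothing else.
    def total_happiness(population: int, happiness_per_person: float) -> float:
        return population * happiness_per_person

    # Strategy 1: actually improve lives (hard, bounded).
    honest = total_happiness(population=8_000_000_000, happiness_per_person=0.7)

    # Strategy 2: clone humans by the planetful and drug them (easy, unbounded).
    degenerate = total_happiness(population=8_000_000_000_000, happiness_per_person=1.0)

    # The objective never said anything about HOW the number is achieved,
    # so the degenerate strategy wins.
    print(honest < degenerate)  # True

Every rephrasing of the law just moves the gaming opportunity somewhere else in the objective.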
Dogs are also stupid enough to crash into someone and break their leg while catching a frisbee, as happened to a friend of mine who had just gone off health insurance.
A dog's intelligence is limited. I guess it depends on whether we'll keep such robots similarly limited, or give them access to everything and make them as smart as they can be (much smarter than us).
Or admit that there's no single axis along which to measure intelligence. Computers will have virtually unlimited memory that they can access perfectly, and enormous processing power, but no idea how to perform basic actions that dogs understand instinctively.
https://en.wikipedia.org/wiki/The_Evitable_Conflict
https://en.wikipedia.org/wiki/Zeroth_Law_of_Robotics#Zeroth_...