> And then it can very well kill a human through misdiagnosis.
I would imagine outcomes would be scrutinized heavily for an application like this. There is a difference between a margin of error (which exists with human doctors as well) and a sentient AI that has decided to kill, which is what it sounds like you're describing.
If we didn't give it that goal, how would it obtain it otherwise?