If their Civic's brakes were poorly designed or implemented, then yes, Honda should be liable. Then we get into the definition of 'poorly' - in what distance and time should the car stop? - and then we need some sophisticated regulation.
The analogue of someone using deep-fakes for fraud is someone purposefully hitting a pedestrian with their car. Should Honda be held liable because someone tried to use their car as a weapon? The classical form of this argument is whether a kitchen knife manufacturer should be held liable when someone uses their knives for homicide.
This analogy is strained, because when it comes to motor vehicles, aside from "street legal" requirements that limit what you can do with a vehicle, we also have cops patrolling the streets and cameras that can catch people breaking the rules by license plate. In theory, you can't drive around without being registered.
What's the equivalent of that for AI? Should there be a watermark so police can trace an image back to a particular person's software? If that isn't acceptable (and I don't think it would be), how do we prevent people from producing deep fakes? At the distribution level? These are hard problems, and I don't think the car analogy really gets us anywhere.
Yes, that's why I offered the kitchen knife example instead. Cars are also a problematic analogy: even though some people still consider a car's operation to be fully under the driver's control rather than the manufacturer's (via its software), that's apparently becoming less and less the case.
> If that isn't acceptable (and I don't think it would be), how do we prevent people from producing deep fakes?
You don't. The problem isn't producing deepfakes. The problem is committing fraud, regardless of the tools used. Someone using deepfakes to e.g. hide facial disfigurement from their employer isn't someone I mind using deepfakes.
> Someone using deepfakes to e.g. hide facial disfigurement from their employer isn't someone I mind using deepfakes.
I agree here. But what about the harder questions? Do you think deepfake porn of celebrities should be allowed? What about deepfake porn of an unpopular student at the local high school?
If these aren't allowed, where is the best place to prevent them while having minimal impact on the allowed uses? At the root level, in the software capable of producing them (what seems to be proposed in TFA)? At the user level (car analogy)? At the distribution level (copyright-style)? I don't know the answer to these questions, but I think we should all be talking and thinking about them.
> Do you think deepfake porn of celebrities should be allowed? What about deepfake porn of an unpopular student at the local high school?
Should be handled by impersonation/defamation laws. For celebrities, it could perhaps be handled by copyright, which would allow them to license their likeness under their own conditions.
> If these aren't allowed, where is the best place to prevent them while having minimal impact on the allowed uses?
By enforcing laws against the bad behaviors themselves and not trying to come up with convoluted regulations on tools just because of their potential to be used badly.
> At the root level of the software capable of producing them (what seems to be proposed in TFA)?
> At the distribution level (copyright-style)?
You'd just be increasing the costs of producing software and of running distribution platforms (think of stuff like YouTube). That sets up already powerful companies to become even more powerful by raising the bar on what potential competition must be ready for from the get-go.
> not trying to come up with convoluted regulations on tools just because of their potential to be used badly
and
> You'd just be increasing the costs of producing software and of running distribution platforms (think of stuff like YouTube). That sets up already powerful companies to become even more powerful by raising the bar on what potential competition must be ready for from the get-go.
These arguments could be made against any "custom" regulatory scheme like what we have for drugs, cars, airplanes, etc. But sometimes the unique harms presented by certain classes of products require unique regulatory schemes.
Maybe you're right (I hope you are) and the potential harms of AI are not significant enough to warrant any special regulation. But I don't think that is _obviously_ the case, and I would be careful when it comes to talking with normies about this stuff - AI does seem really scary to them, and hearing a techie hand-wave away their concerns about technology they don't understand has the potential to make it worse. Good luck out there buddy.
In that case, I think you have a point. However, consider these situations:
* Honda built their car poorly, leaving sharp edges at the fenders, and the driver purposely used those to injure someone. I think Honda should still bear some liability; their poor construction resulted in extra injury, regardless of how the car was used.
* Honda intentionally or recklessly built the car in a way that made it a useful tool for murder, with features that served no worthwhile purpose and that could have been made safe. I don't know the law exactly, but I expect Honda would be liable, and IMHO that would be absolutely right.
Still, if Honda builds a safe car and someone simply chooses to use its mass x acceleration to kill someone, then I wouldn't hold Honda liable.