Doesn't this just open the question of whether the chatbot can get a JD?
The other angle is whether the chatbot can be equivalent to a process which a proper person can rubber stamp. For instance, a professional engineer might run a pre-written structural engineering model against their building design and certify that the building was sound - and then stand up in court and say they had followed standard process.
It seems weirdest here that the court is treating the chatbot as a person. Lawyers use computer tools all the time for discovery, and then use that information to make arguments in court as a proper person.
You can represent yourself in court without being a lawyer, so isn't a person doing so just a proper person rubber stamping an electronic output?
It feels like this court decision, holding that an electronic tool is not a proper person, is effectively creating case law that treats chatbots as people. I don't think they are.
The difference is that the engineer is liable... how is an AI going to be liable? What is the point of holding an AI liable? If the company is going to be liable on behalf of the AI, what do you think is going to happen? They aren't going to provide the service...
Well, if an engineer builds a bridge and it falls down because the industry-standard software they used had a bug, I imagine the settlement would be paid by their insurer, who would in turn sue the software vendor.
In your world, an X-ray machine fries your leg and the manufacturer doesn't get sued? Of course the vendor gets sued.
This is why open source licences usually have terms disclaiming responsibility. If you use them, it's your fault.
Now, if a hospital buys an X-ray machine with that disclaimer, they are going to carry the payout. And if the machine doesn't have a disclaimer like that but the manufacturer has gone bust, the hospital is going to regret not doing normal procurement checks for vendor solvency.
But in this case - people self-represent in court all the time based on bad information from YouTube. I'm sure in the future they'll type "write an argument for my case" into GPT before the trial and read it out. How is this different?
I'm uncomfortable because this feels like... the accused brings a law book to court and is told "that book doesn't have a JD". The fact that we are asking for a piece of software to have a human qualification is weird.
When you self-represent, you implicitly cannot sue for malpractice. The AI bot isn't self-representation; it's representation in everything but name and liability, which they explicitly disclaim. You can characterize it however you want, but it's just facially the unlicensed practice of law. If someone wants to ask ChatGPT legal questions and dig their own grave, that's entirely different from a business that purports to offer legal advice but disclaims any responsibility therefrom. Frankly, I don't know what's so confusing about that to you.