Paediatricians are somewhat frequently^ the victims of idiot attackers who think (based on the prefix of that job title) that they are paedophiles.
LLMs giving confidently incorrect information like this is so much worse: it takes much less of an idiot to take it at face value and, if so minded, to attack the innocent journalist (or anyone similarly associated with the real criminals, as the article points out).
(^I mean relative to the incidence of paedophilia in the field, or certainly to attacks on other professions based on misguided assumptions. Far too frequently, with several occurrences in the last 24 years it seems (I initially just wanted to check details on the one case I dimly recalled), but not like it's happening every week.)
I’ve never heard of a pediatrician being mistaken for a pedophile. Is this somehow specific to the UK spelling or UK culture?
I’ve noticed that British English speakers use truncated forms where Americans don’t, like “uni” for university or “veg” for vegetables. Is it the case that paediatricians/pediatricians are referred to as paedos/pedos, and that causes ambiguity? That would be unfortunate.
No, it's just barely literate idiot thugs. The words aren't pronounced or spelt any more similarly in BrE than in AmE or anything. It's just 'oh, paediatrician, that's what paedo is short for innit'.
(There might be some overlap there; I haven't read more than necessary to check that these weren't cases of the paediatrician actually being a paedophile too. There is some of that, unfortunately.)
Once upon a time I bought a cheap pedometer. After a while I grew curious about how it actually worked. Some internet searching failed to turn up anything. A couple days later, while at work, it suddenly occurred to me that there were probably pedometer nerds and enthusiasts and they probably have web forums that could tell me how it worked.
I'm usually a silent thinker, but sometimes when I have a sudden idea I'll say it out loud. This was one of those occasions, and I started to say "I need to find some pedophile forums!" Fortunately, about halfway through I realized that pedophile is not the correct word for "pedometer enthusiast" and managed not to complete the sentence.
It turns out that the "pedo" in "pedophile" and the "pedo" in "pedometer" are completely unrelated.
In "pedometer" it is not the prefix "pedo" attached to meter. It is "ped" connected to "meter" with "o", similar to the way the "o" is used in "speedometer". The "ped" is a shortening of the Latin "pedis". It came to English via the French pédomètre.
The "pedo" in "pedophile" comes from Greek paidos which means "child".
I have no idea what the actual word for "pedometer enthusiast" is. For that matter I also am not sure what the correct word for a meter that measures children would be.
> LLMs giving confidently incorrect information like this is so much worse
And even in the face of this problem's pervasiveness, I see more and more people just accepting LLM output.
Recently there was a post about using an “LLM” to do sentiment analysis on 600k orange site posts. Without even a modicum of irony.
Both the post and the comments lack any instance of the word “hallucination”, but I’ve often seen downvotes used as a means of silencing here, so maybe any dissenting voices were killed off.
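For context, a pipeline like the one that post describes presumably looks something like this sketch (the OpenAI client, the model name, and the prompt are my assumptions; the post's actual setup isn't specified here). Note that nothing in it checks the model's answers against ground truth:

  # Minimal sketch of an LLM sentiment-analysis pipeline of the kind the
  # post describes. Assumptions (mine, not the post's): the OpenAI Python
  # client and the model name; substitute whatever was actually used.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  ALLOWED = {"positive", "negative", "neutral"}

  def classify_sentiment(comment: str) -> str:
      # Ask the model for a one-word label. Nothing here verifies the
      # answer against ground truth, which is exactly the criticism above.
      response = client.chat.completions.create(
          model="gpt-4o-mini",  # assumption; any chat model slots in
          messages=[
              {"role": "system",
               "content": "Classify the sentiment of the user's text as "
                          "exactly one of: positive, negative, neutral."},
              {"role": "user", "content": comment},
          ],
          temperature=0,
      )
      label = response.choices[0].message.content.strip().lower()
      # Without this guard, any hallucinated free text would be recorded
      # as a 'sentiment' and silently skew the aggregate numbers.
      return label if label in ALLOWED else "unknown"

  comments = ["Great write-up, thanks!", "This is confidently wrong."]
  print([classify_sentiment(c) for c in comments])

Even that final guard only catches malformed labels, not confidently wrong ones; across 600k posts, a small hallucination rate compounds into a real bias in the results.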
Search engines have long used the defence that they're not publishers; they merely guide you to publications by other people. Replacing search with an LLM changes that. Who's the publisher when Bing publishes the output of an LLM? I would say that it's Microsoft. And therefore Microsoft are liable when their machine libels people.
Exactly: once you choose to use it for your business, you own the results. The challenge here will be if companies try to convince judges or lawmakers that there’s something magical about LLMs which means they shouldn’t be liable for their business decisions.
I think this is going to be a billion-dollar legal question, and that you’re right. Companies really want to cut jobs, but if precedents like the Air Canada case hold that statements by chatbots are legally binding, and companies lose the platform defense for misinformation, that will sharply limit adoption for anything public-facing.
> Air Canada case hold that statements by chatbots are legally binding
I think that's the only outcome that makes any legal sense. Corporations may be able to buy favourable verdicts, and weak US consumer protection will help them, but EU jurisdictions are not going to let you run a machine that tells lies about people, products or services.
This is interesting: "people in the EU also have the right to access the information stored about them under the GDPR"
I think I am going to have to ask OpenAI about all the weights that are related to me. You should do the same.
It shouldn’t be allowed to respond with false information claiming a person is a child molester, with no way to correct it. If that’s not possible, then it shouldn’t be legal.
It’s a bit like proposing to ban cars if someone drives one into a pedestrian isn’t it?
LLMs are computer programs, tools. They’re not sentient, and they don’t know what’s true or false. The error lies entirely with the human who chooses to put more than trivial weight on their output.
Reminds me of the “talking to God” program in TempleOS. We are all Terry now.
OK, fine, we can solve this with the informational equivalent of the Proposition 65 notice: "this content is the result of an LLM. It may contain falsehoods or libels. You are prohibited from relying on this information, and if you republish it you assume liability for any falsehoods".
(remember the Java license clause about not using it for nuclear reactors?)
Maybe we need a Data Protection Act adjustment: before using an LLM, the individual entering the prompt needs to be registered as a data controller, and needs to secure the consent of all individuals whose names appear in the LLM data?
That comparison doesn’t fit the situation: Microsoft is liable for their LLMs just like they’re liable for their corporate vehicles. If their company cars crash into pedestrians they don’t get to say “oh, that just happens sometimes” and shrug it off.
Do you mean Microsoft? Or are you saying there’s some kind of understanding which makes it okay to enter someone’s name into a Bing search box and see false accusations of serious crimes?