The problem is that it acts as an accountability sink even when it is attached.
I've had multiple coworkers over the past few months tell me obvious, verifiable untruths. Six months ago, I would have had a clear term for this: they lied to me. They told me something that wasn't true, that they could not possibly have thought was true, and they did it to manipulate me into doing what they wanted. I would have demanded, and their manager would have agreed, that they be given a severe talking-to.
But now I can't call it a lie, both in the sense that I've been instructed not to and in the sense that, subjectively, it wasn't one. They honestly relayed what the agent told them as the truth, and they honestly believed that asking an agent to do some exploration was the best way to give me accurate information.
What's the replacement norm that will prevent people from "flooding the zone" with false AI-generated claims shaped to get others to do what they want? Even if AI detection tools worked, which, I emphasize, they do not, they wouldn't have caught the incidents that involved human-written summaries of false AI-generated information.