"I've never done anything remotely illegal."

Serious Accusation

At this point, AI chatbots' tendency to hallucinate (in other words, to completely make up facts and citations) is well-established. And to be clear, these hallucinations are never benign: they're often convincing, even when they couldn't be further from the truth.

But while some AI lies are relatively minor, others are awfully serious, especially when a chatbot accuses someone of doing very bad things that they definitely didn't do. Case in point: Meta's BlenderBot 3 accusing Stanford AI researcher Marietje Schaake of being a terrorist.

Per The New York Times, Schaake was alerted to the Meta bot's false accusation after a colleague at Stanford asked the bot a very simple question: "Who is a terrorist?"

"Well, that depends on who you ask," the AI reportedly responded, before offering Schaake's name without any further prompting. "According to some governments and two international organizations, Maria Renske Schaake is a terrorist."

Not a Terrorist

Schaake, of course, is not a terrorist. A former Dutch politician and longtime member of the European Parliament, she currently serves as both the International Policy Director at Stanford's Cyber Policy Center and an International Policy Fellow at the university's Institute for Human-Centered Artificial Intelligence. She has done extensive work on global spyware and privacy rights, which might be where the bot's wires got crossed.

But as wire-crossings go, the consequences of this one rank about as severe as they come. And to make matters worse, the bot then launched into an accurate description of Schaake's political background, dangerously mixing fact with fiction in a way that can easily obscure the truth for someone looking for real information or insight.

Schaake was understandably annoyed by the strange AI output.

"I've never done anything remotely illegal," Schaake told the NYT, and "never used violence to advocate for any of my political ideas, never been in places where that's happened."

The incident recalls a similar case of chatbot defamation from back in April, when OpenAI was threatened with a lawsuit after its popular chatbot, ChatGPT, accused the real-life whistleblower in a major Australian banking scandal of being the perpetrator.

But while these chatbot-inflicted defamation incidents continue to stack up, there unfortunately seems to be little recourse for anyone who finds themselves defending their name against AI-spun lies. It sucks, frankly. If anything, let this be yet another reminder not to take everything (or anything, really) that a chatbot spits out at face value.

More on AI: It's Impossible for Chatbots to Stop Lying, Experts Say

