This could be huge.

For the first time, OpenAI may face a lawsuit over ChatGPT-generated defamation.

The accuser? An Australian mayor named Brian Hood, who according to Reuters is peeved that ChatGPT wrongly identified him as a guilty party in a "foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s," apparently claiming that Hood had even served prison time for his so-called crime. Hood was involved in the scandal — but as the whistleblower, not the crime-doer.

Yeah, we'd be pissed, too.

Per Reuters, Hood's lawyers sent a "letter of concern" to OpenAI back on March 21, demanding that the company fix its chatbot's error within 28 days. If the company doesn't, Hood says he's suing.

"It would potentially be a landmark moment in the sense that it's applying this defamation law to a new area of artificial intelligence and publication in the IT space," James Naughton, a partner at Hood's law firm Gordon Legal, told Reuters.

"He's an elected official, his reputation is central to his role," he continued. "It makes a difference to him if people in his community are accessing this material."

It's a fascinating case, and if Hood does sue, it'll be interesting to see how the mayor's argument holds up in court.

ChatGPT and similar bots powered by large language models make things up all the time — they're predictive devices, not analytical ones, and though they sometimes get their predictions right, they're also often wrong.

And while OpenAI's ChatGPT, Google's Bard, and Microsoft's OpenAI-powered Bing Chat — the three most prominent chatbots currently on the market — all offer this-stuff-might-be-wrong disclaimers, a lot of people out there still use these machines like fact-finding search engines; after all, Google and Bing are the world's foremost search engines, and OpenAI has already integrated its tech into an AI grade school tutor.

Even when it's talking about real people and events, ChatGPT frequently fails to provide legitimate citations — or any citations at all. Instead, it just spits out answers with confidence, regardless of whether those answers are correct.

"It's very difficult for somebody to look behind that to say 'how does the algorithm come up with that answer?'" Naughton told Reuters. "It's very opaque."

As this would be the first case of its kind, there's no way to really tell how it'll shake out.

Questions remain. If a user were to overtly use ChatGPT as a tool of disinformation — for example, prompting the machine to "write a bio about Australian mayor Brian Hood, including a paragraph about how he was arrested for bribery" — that would be one thing. If ChatGPT, an unregulated technology, is spitting this stuff out on its own, though? Hood might just have a leg to stand on.

More on ChatGPT defamation: ChatGPT Will Gladly Spit out Defamation, as Long as You Ask for It in a Foreign Language

