The AI giant won't get off the hook that easily.

See You In Court

OpenAI has been trying to get a libel lawsuit targeting ChatGPT thrown out of court. But a Georgia judge has now ruled against the AI company, allowing the defamation suit to proceed in what could be a decisive case regarding OpenAI's liability over what its chatbot says, Ars Technica reports.

The suit was filed last June by gun rights activist Mark Walters, who alleges that ChatGPT falsely accused him of embezzlement.

In November, OpenAI fired back by arguing that no defamation occurred because the chatbot is not a publication, adding that there was "no actual malice, no listener who believed the alleged defamatory content, and thus no harm to any reputation," as quoted by MediaPost.

These arguments apparently didn't convince the judge, however, who issued an order denying the motion last week.

"We are pleased the court denied the motion to dismiss so that the parties will have an opportunity to explore, and obtain a decision on, the merits of the case," John Monroe, Walter's attorney, told Ars.

Far Out Claims

According to the suit, Walters was alerted to the alleged defamation by a journalist, Fred Riehl, who had used the chatbot to summarize a completely unrelated complaint filed by the Second Amendment Foundation (SAF), a nonprofit with which Walters was associated.

Totally off the mark, ChatGPT instead fabricated an entire lawsuit, complete with a complaint and a fake case number, Ars notes, accusing Walters of embezzling funds from the SAF.

It's a glaring example of the "hallucinations" that plague large language models, in which they conjure up incorrect information and present it as fact. This form of misinformation is unintentional, but there's no denying that the tech can be used to deliberately forge disinformation, too.

On the Case

Ars notes several legal arguments that OpenAI has fielded, which could shed some light on why the judge denied its motion to dismiss.

Perhaps most important is the startup's claim that Walters is a public figure, which would require him to prove there was "actual malice" involved. However, Monroe argued in a November court filing that OpenAI has yet to prove Walters is in fact such a figure.

OpenAI also fell back on its disclaimer, which warns that the bot's responses may be inaccurate and should be verified. By that logic, the onus would fall on the person who prompted the response, in this case the journalist Riehl (who didn't publish the chatbot's troubling output).

But, as Monroe argues, a "disclaimer does not make an otherwise libelous statement non-libelous," as quoted by Ars.

These are strong rebuttals, and they'll need to be. Defamation is notoriously difficult to prove in the US, especially for public figures. Still, the judge's decision to let the case proceed is encouraging news for Walters. By contrast, it's an extremely worrying sign for OpenAI, which already faces its fair share of lawsuits, as well as an ongoing FTC investigation into whether its chatbot made harmful statements about users.

More on OpenAI: OpenAI Axes Ban on Military Contracts, Reveals Deal With Pentagon

