What a mess.

Right to Bear AI

A radio host from Georgia named Mark Walters is suing OpenAI after its ChatGPT service falsely told a journalist that he was embezzling funds from the Second Amendment Foundation (SAF), a gun rights nonprofit.

Walters filed what is likely a first-of-its-kind libel lawsuit earlier this week, Gizmodo reports, alleging that ChatGPT damaged his reputation by making the claims.

While lawyers will likely face an uphill battle proving in court that an AI chatbot harmed Walters' reputation, the lawsuit could nevertheless steer the conversation as these tools continue to hallucinate claims with aplomb.

Walters vs. ChatGPT

In the suit, Walters' lawyer alleges that OpenAI's chatbot "published libelous material regarding Walters" when Fred Riehl, the editor-in-chief of a gun website, asked it for a summary of a case involving Washington Attorney General Bob Ferguson and the SAF.

ChatGPT implicated Walters in the case, even identifying him as the SAF's treasurer and chief financial officer, a position he doesn't hold. In fact, Walters isn't involved with the SAF at all, and the case Riehl was researching makes no mention of his name.

The AI tool even doubled down, generating entire fabricated passages of the complaint, which in reality has nothing to do with financial misconduct. As Gizmodo points out, the AI even messed up the case number.

Importantly, Riehl never published the false information the AI spat out, instead contacting attorneys involved in the suit in question.

ChatGPT and other AI chatbots like it have a well-documented track record of confidently inventing falsehoods, a flaw that deeply undermines their usefulness.

Despite this flaw, companies including OpenAI and Google have pushed these tools as a new way to retrieve information, all while, oddly, constantly warning that their output shouldn't be trusted.

Causing Harm

Walters' attorney John Monroe is now arguing that these companies should nonetheless be held accountable for those flawed outputs.

"While research and development in AI is a worthwhile endeavor, it is irresponsible to unleash a system on the public knowing that it fabricates information that can cause harm," Monroe told Gizmodo.

But could fabricated information spat out by the likes of ChatGPT ever be deemed libel in court?

Eugene Volokh, a law professor at the University of California, Los Angeles who is authoring a journal article on the legal liability of AI models, told Gizmodo that it's not out of the question.

"OpenAI acknowledges there may be mistakes but [ChatGPT] is not billed as a joke; it’s not billed as fiction; it’s not billed as monkeys typing on a typewriter," he told the news outlet.

More on ChatGPT: AI Plagiarism Detection Software Keeps Falsely Accusing Students of Cheating

