OpenAI says it's doing nothing wrong. Sorry!

It's no secret that large language model (LLM)-powered generative AI tools like OpenAI's ChatGPT, which spit out text not by way of human-esque understanding but of predictive math, have a serious hallucination problem.

In the nascent AI biz, "hallucination" is basically another word for fabrication. AI systems like ChatGPT have a concerning penchant for inventing incorrect or entirely false facts and details, a problem made worse because they present the lies just as confidently as factual information, meaning every output is a potential minefield of mistakes or worse.

Most concerningly, these outputs can sometimes contain falsehoods about real people, a phenomenon that has already resulted in multiple defamation lawsuits: one against OpenAI, whose chatbot falsely accused a radio host named Mark Walters of embezzlement, and one against Microsoft, whose OpenAI-powered Bing Chat feature incorrectly told users that a regular non-terrorist guy was a convicted terrorist. (OpenAI was also previously threatened with yet another similar lawsuit, but that case was dropped.)

As these cases trudge on, AI makers' defenses continue to come into focus. For example, as Ars Technica reports, OpenAI has argued that its defamation suit should be dismissed entirely, contending that ChatGPT outputs can't amount to libel — and that if its AI occasionally accuses real people of serious criminal behavior, its hands are clean. Convenient!

It's Complicated

Per Ars, OpenAI's dismissal request, filed in July, rests mainly on the claim that ChatGPT is only a drafting tool, not a publishing tool. Its outputs might contain inaccuracies, the company seems to be arguing, but as ChatGPT doesn't actually publish any of that content to the web, it ultimately falls on the tool's human user to fact-check the AI's work and remove anything that could amount to libel.

In a perfect world, sure. But back in reality, a lot of human users look to ChatGPT as a de facto search engine. Plus, most writing tools don't produce believable lies about real people.

But that same drafting-tool logic fails to extend to Microsoft's suit over Bing's bogus terrorism accusation, which is complicated by the fact that, unlike ChatGPT, Bing's entire thing is being a search engine. If a search engine is confidently spitting out lies about real and specific people, a judge could well find such material misleading and defamatory — and considering that the Silicon Valley giant last week asked the court for extra time to craft its defense, it's probably safe to say that Microsoft knows it might be in for a tricky legal battle.

The ultimate outcome of these suits remains to be seen, but it's certainly worth keeping an eye on them. They could help decide where the AI industry goes — or doesn't.

More on AI and defamation: Man Sues OpenAI After ChatGPT Claimed He Embezzled Money

