The document referenced studies that were entirely made up.

Irony Fire

A Stanford professor who claims to be an expert on how "people use deception with technology" has been accused of using an AI chatbot to draft an affidavit in support of an anti-deepfake law in Minnesota.

As the Minnesota Reformer reports, lawyers challenging the law on behalf of Republican state representative Mary Franson found that the affidavit, filed by Stanford Social Media Lab founding director Jeff Hancock, cited studies that don't appear to exist, a telltale sign of AI text generators, which often "hallucinate" facts and reference materials.

While it's far from the first time a lawyer has been accused of making up court cases using AI chatbots like OpenAI's ChatGPT, it's an especially ironic development given the subject matter.

The law, which calls for a ban on the use of deepfakes to influence an election, was challenged in federal court by Franson on the grounds that such a ban would violate First Amendment rights.

But in an attempt to defend the law, Hancock — or possibly one of his staff — appears to have stepped in it, handing the plaintiff's attorneys a golden opportunity.

Law Fare

One study cited in Hancock's affidavit, titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," doesn't appear to exist.

"The citation bears the hallmarks of being an artificial intelligence (AI) 'hallucination,' suggesting that at least the citation was generated by a large language model like ChatGPT," Franson's attorneys wrote in a memorandum. "Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question."

And it's not just Franson's lawyers. UCLA law professor Eugene Volokh discovered a second cited study, titled "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance," which likewise doesn't appear to exist.

It's a troubling turn in an otherwise meaningful effort to keep AI deepfakes from swaying an election, something that has become a very real risk given steady advancements in the tech.

It also highlights a recurring trend: lawyers keep getting caught using tools like ChatGPT when the chatbot bungles the facts. Last year, New York City-based lawyer Steven Schwartz was caught using ChatGPT to help him draft a court filing that cited cases that don't exist.

Colorado-based lawyer Zacharia Crabill was caught red-handed doing the same thing and was fired from his job in November for the offense.

Crabill, however, dug in his heels.

"There’s no point in being a naysayer," he told the Washington Post of the firing, "or being against something that is invariably going to become the way of the future."

More on AI and lawyers: Lawyer in Huge Trouble After He Used ChatGPT in Court and It Totally Screwed Up

