Double Down

Lawyer Gets Caught Using AI in Court, Responds in the Worst Possible Way

Not making a great case for yourself, pal.
Frank Landymore
Even after being caught, the embattled lawyer, at first, refused to admit that he used AI in his court documents.
Getty / Futurism

What is it with lawyers and AI? We don’t know, but it feels like an inordinate number of them keep screwing up with AI tools, apparently never learning from their colleagues who get publicly crucified for making the same mistake.

But this latest blunder from a New York attorney, in a lawsuit centered on a disputed loan, takes the cake. As 404 Media reports, after getting caught using AI — thanks to the hallucinated quotes and citations littering his court filings — defense lawyer Michael Fourte submitted a brief explaining his AI usage, which was also written with a large language model.

Needless to say, the judge was not amused.

“In other words, counsel relied upon unvetted AI — in his telling, via inadequately supervised colleagues — to defend his use of unvetted AI,” wrote New York Supreme Court Judge Joel Cohen in a decision filed earlier this month.

“This case adds yet another unfortunate chapter to the story of artificial intelligence misuse in the legal profession,” the judge further lamented.

Perhaps one of the reasons we keep hearing about these completely avoidable catastrophes is that catching your opponent making even a single mistake with an AI tool is an easy way to gain the upper hand in court, so everyone’s on the lookout for them.

That’s what happened here: it was the plaintiffs’ legal team that first caught the mistakes, which included inaccurate or completely made-up citations and quotations. The plaintiffs then asked the judge to sanction Fourte, which is when he committed the legal equivalent of shoving a stick between the spokes of his own bike wheel: he used AI again.

In his opposition to the sanctions motion, Fourte submitted a document containing more than twice as many made-up or erroneous citations as the first time, an astonished-sounding Cohen wrote.

His explanation was also pretty unsatisfactory. Fourte neither admitted nor denied the use of AI, Judge Cohen wrote, but instead tried to pass off the botched citations as merely “innocuous paraphrases of accurate legal principles.”

Somehow, it gets worse. After the plaintiffs flagged the new wave of errors in Fourte’s opposition to the sanctions motion, the defense lawyer — who by now was presumably sweating more than a character in a Spaghetti Western — strongly implied that AI wasn’t used at all, complaining that the plaintiffs provided “no affidavit, forensic analysis, or admission” confirming the use of the tech. When he had an opportunity to set the record straight during oral arguments in court, Fourte instead insisted that the “cases are not fabricated at all,” the judge noted.

Eventually, though, he cracked. After getting further grilled on how a completely made-up court case ended up in his filings, Fourte admitted he “did use AI.” He also, in practically the same breath, said he took “full responsibility” for the AI-generated nonsense — while trying to shift some of the blame onto the additional staff he’d brought onto the case. Classic.

Later, Fourte “muddled those statements of contrition,” the judge mused, by saying, “I never said I didn’t use AI. I said that I didn’t use unvetted AI.”

The judge called BS. “If you are including citations that don’t exist, there’s only one explanation for that. It’s that AI gave you cites and you didn’t check them,” Cohen responded to Fourte’s pleas. “That’s the definition of unvetted AI.”

After all the back and forth, Judge Cohen granted the plaintiffs’ motion for sanctions.

Fourte declined to discuss specifics when 404 reached out for comment. “As this matter remains before the Court, and out of respect for the process and client confidentiality, we will not comment on case specifics,” he told the outlet. “We have addressed the issue directly with the Court and implemented enhanced verification and supervision protocols. We have no further comment at this time.”

While his case seems especially egregious, Fourte is definitely not alone. Dozens of other lawyers have been caught for largely the same reason: submitting erroneous or made-up case law generated by AI. Some used public chatbots like ChatGPT, but others used AI tools purpose-built for law, illustrating how fundamentally error-prone the tech remains.

One of the biggest firms caught up in an AI scandal is Morgan & Morgan, which rushed out a panicked company-wide email after two of its lawyers faced sanctions for citing AI hallucinations earlier this year. Judges, meanwhile, have done their darndest to make an example of lawyers careless enough to rely on the word of an LLM — but as this latest case shows, not everyone’s getting the memo.

More on AI: Top US Army General Says He’s Letting ChatGPT Make Military Decisions


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.