This is getting out of hand.

Fool Me Twice

Yet another team of lawyers was found leaving AI slop in court documents. It's the latest example of white-collar professionals outsourcing their work to confidently wrong AI tools — and this time, it's not just about any old frivolous lawsuit.

As The Guardian reports, a pair of Australian lawyers named Rishi Nathwani and Amelia Beech, who are representing a 16-year-old defendant in a murder case, were caught using AI after documents they submitted to prosecutors proved to be riddled with a series of bizarre errors, including made-up citations and a misquoted parliamentary speech.

The hallucinations caused a cascade of mishaps, highlighting how even a single AI fabrication in this setting can have a domino effect.

Per The Guardian, the prosecution didn't double-check the accuracy of the defense's references, which caused them to draw up arguments based on AI-fabricated misinformation. It was the judge who finally noticed that something was amiss, and when the defense was confronted about the wild array of mistakes in court, they admitted to using generative AI to whip up the documents.

Worse yet, that wasn't even the end of the defense's unacceptable behavior. As The Guardian explains, the defense re-submitted purportedly revised documents — only for those documents to contain more AI-generated errors, including citations to completely nonexistent laws.

"It is not acceptable for AI to be used unless the product of that use is independently and thoroughly verified," justice James Elliott told Melbourne's Supreme Court, as quoted by the newspaper, adding that "the manner in which these events have unfolded is unsatisfactory."

Unacceptable

The stakes are incredibly high in this case. Nathwani and Beech are defending a minor accused of murdering a 41-year-old woman while attempting to steal her car (per the newspaper, the teen was ultimately found not guilty of murder on grounds that he was cognitively impaired at the time of the killing).

Elliott expressed concern that the "use of AI without careful oversight of counsel would seriously undermine this court's ability to deliver justice," according to The Guardian, as AI-generated misinformation could stand to "mislead" the legal system.

The incident is a worrying indictment of the widespread use of a technology that's still plagued by constant hallucinations. Wielded without sufficient oversight by legal professionals, such tools could stand to alter the course of legal proceedings.

Real decisions, in other words, could be made based on the nonsensical musings of a hallucinating AI.

More on AI and courtrooms: Law Firms Caught and Punished for Passing Around "Bogus" AI Slop in Court
