The law firm Morgan & Morgan has rushed out a stern email to its attorneys after two of them were caught citing fake court cases invented by an AI model, Reuters reports.
Sent earlier this month to all of the firm's more than 1,000 lawyers, the email warns at length about the tech's proclivity for hallucinating. But the pros apparently still outweigh the cons: rather than banning AI usage outright, as plenty of organizations have done, Morgan & Morgan leadership takes the middle road and gives the usual spiel about double-checking your work to ensure it's not totally made-up nonsense.
"As we previously instructed you, if you use AI to identify cases for citation, every case must be independently verified," the email reads. "The integrity of your legal work and reputation depend on it."
Last week, a federal judge in Wyoming admonished two Morgan & Morgan lawyers for citing at least nine instances of fake case law in court filings submitted in January. Threatened with sanctions, the embarrassed lawyers blamed an "internal AI tool" for the mishap and pleaded with the judge for mercy.
"When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple," Andrew Perlman, dean of Suffolk University's law school and an advocate of using AI in legal work, told Reuters.
The judge hasn't decided whether he'll punish the lawyers yet, per Reuters. Nonetheless, it's an enormous embarrassment for the relatively well-known firm, especially given who's on the other side of the case. The lawsuit in question is against the world's largest retailer, Walmart, alleging that a hoverboard the company sold was responsible for a fire that burned down the plaintiff's home. Now the corporate lawyers are probably cackling to themselves in a backroom somewhere, their opponents having shot themselves in the foot so spectacularly.
Anyone familiar with the shortcomings inherent to large language models could've seen something like this coming from a mile away. And according to Reuters, the tech's dubious usage in legal settings has already led to lawyers being questioned or disciplined in at least seven cases in the past two years.
It's not just the hallucinations that are so pernicious; it's how authoritatively the AI models lie to you. Worse, anything that promises to automate a task tends to induce the person using it to let their guard down, a problem that's become apparent in self-driving cars, for example, and in news agencies that have experimented with using AI to summarize stories or assist with reporting. Organizations can tell their employees to double-check their work all they want, but screw-ups like these will keep happening.
To address the issue, Morgan & Morgan is requiring attorneys to acknowledge they're aware of the risks associated with AI usage by clicking a little checkbox added to its AI tool, The Register reported. We're sure that'll do the trick.
More on AI: New York Times Encourages Staff to Create Headlines Using AI