
A woman in California successfully used AI tools, including ChatGPT, to get her eviction overturned and avoid tens of thousands of dollars in penalties over several months of litigation.
As NBC News reports, Lynn White was behind on rent and, after receiving an eviction notice, initially lost a jury trial. Instead of continuing to work with a local tenant advocacy network, she turned to ChatGPT and the AI search platform Perplexity to represent herself in court.
That’s almost always a bad idea. But according to NBC, the chatbot identified potential errors in a judge’s procedural decisions for White, informed her what actions to take, and drafted responses to the court.
“I can’t overemphasize the usefulness of AI in my case,” she told the broadcaster. “I never, ever, ever, ever could have won this appeal without AI.”
White is one of several litigants NBC spoke to who represented themselves with the help of AI and came out on top. Another is Staci Dennett, a home fitness business owner in New Mexico, who used AI to successfully negotiate a settlement over unpaid debt.
“I would tell ChatGPT to pretend it was a Harvard Law professor and to rip my arguments apart,” she told NBC. “Rip it apart until I got an A-plus on the assignment.”
The output was eerily convincing.
“If the law is something you’re interested in as a profession, you could certainly do the job,” the opposing lawyers reportedly told her in an email.
However, the tools aren’t always successful in overturning decisions or winning legal cases. AI tools are known to spit out made-up and misleading information that could get a pro se litigant in trouble — like energy drink mogul Jack Owoc, who was sanctioned in August after filing a motion filled with hallucinated citations. Owoc was ordered to complete ten hours of community service, per NBC.
Perhaps more worryingly, even a growing number of professional lawyers have been caught red-handed submitting filings that include hallucinated court cases, resulting in penalties and embarrassment.
Case in point: just earlier this week, 404 Media reported that a New York attorney who was caught using AI in court then submitted an AI-generated explanation for his gaffe.
“This case adds yet another unfortunate chapter to the story of artificial intelligence misuse in the legal profession,” the disappointed judge overseeing the case wrote in a scathing decision.
In August, a California attorney was issued a “historic” $10,000 fine for submitting an AI-generated court appeal. Twenty-one of the 23 quotes from cases it cited were found to be fabricated.
Despite the risk of misleading courts, the advent of easily accessible AI tools has led to a slew of people representing themselves in court.
“I’ve seen more and more pro se litigants in the last year than I have in probably my entire career,” Thorpe Shwer paralegal Meagan Holmes told NBC.
That’s despite companies like Google warning users outright not to rely on AI for legal advice. Elon Musk’s xAI, for instance, warns users in its terms of service not to use its services to make “high-stakes automated decisions that affect a person’s safety, legal or material rights.”
Nonetheless, existing guardrails aren’t stopping tools like ChatGPT from spitting out detailed answers when presented with queries pertaining to legal proceedings, for better or for worse.
“I can understand more easily how someone without a lawyer, and maybe who feels like they don’t have the money to access an attorney, would be tempted to rely on one of these tools,” attorney Robert Freund told NBC. “What I can’t understand is an attorney betraying the most fundamental parts of our responsibilities to our clients… and making these arguments that are based on total fabrication.”
More on lawyers using AI: Lawyer Gets Caught Using AI in Court, Responds in the Worst Possible Way