Is this really a good idea?
Juan Manuel Padilla Garcia, a judge in Cartagena, Colombia's First Circuit Court, used the text generator to come up with questions about a recent case involving a dispute between a health insurance company and the parents of an autistic child.
"The arguments for this decision will be determined in line with the use of artificial intelligence (AI)," Garcia wrote in his decision, as translated by Vice. "What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI."
Lies and Bias
The AI came up with legal questions about the case, which centered on whether a health insurance company should pay for the child's coverage.
"Is an autistic minor exonerated from paying fees for their therapies?" the AI tool wrote. "Has the jurisprudence of the constitutional court made favorable decisions in similar cases?"
Garcia used the AI's full responses in his decision, which is fairly astonishing considering ChatGPT isn't capable of differentiating truth from fiction, and can be heavily biased in its reasoning.
It's especially noteworthy considering that the AI startup DoNotPay recently attempted to use an automated "AI lawyer" in a courtroom — but abruptly canceled its plans when CEO Joshua Browder was threatened with jail time by prosecutors if he were to "follow through with bringing a robot lawyer into a physical courtroom."
While Judge Garcia's motivations are hazy, it's worth noting that his actions could set a worrying legal precedent.
READ MORE: A Judge Just Used ChatGPT to Make a Court Decision [Vice]