Last month, Tesla CEO Elon Musk's lawyers argued that 2016 recordings of him making big promises about Tesla's Autopilot software could have been deepfaked.
While the judge didn't buy the lawyers' arguments and ordered Musk to testify under oath, the stunt illustrates a broader trend. As generative AI-powered tools make it easier than ever before to synthesize the voices and faces of public figures, lawyers are trying to seize the opportunity to undermine the very foundation of a shared factual reality.
That has experts deeply worried, NPR reports. The phenomenon could end up influencing the beliefs of jurors and even the general population.
Hany Farid, a digital forensics expert and professor at the University of California, Berkeley, told NPR that he's worried about a future in which evidence of "police violence, human rights violations, a politician saying something inappropriate or illegal" is dismissed because of the possibility that it was digitally faked.
"Suddenly there's no more reality," he said.
Musk's lawyers weren't the first to invoke deepfakes in court. Two defendants who were present at the January 6 riot claimed that videos showing them at the Capitol could have been manipulated by AI, according to NPR.
Insurrectionist Guy Reffitt argued that the audiovisual evidence implicating him had been deepfaked, arguments a judge later dismissed before Reffitt was found guilty.
But that may not always be the outcome, as US law is woefully ill-equipped for this kind of argumentation.
"Unfortunately, the law does not provide a clear response to Reffitt’s lawyer’s reliance on deepfakes as a defense," Rebecca Delfino, associate dean at the Loyola Law School, Los Angeles, wrote in a February review paper. "But this much is clear — the 'deepfake defense' is a new challenge to our legal system’s adversarial process and truth-seeking function."
Delfino argues that lawyers are using such claims to "exploit juror bias and skepticism about what is real," a troubling development given that the current evidentiary guidance was "developed before the advent of deepfake technology."
The stakes are high. As manipulated media proliferates, public trust slowly erodes and the lines between fact and fiction continue to blur.
Legal scholars Bobby Chesney and Danielle Citron dubbed this paradox the "Liar's Dividend."
"As the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deepfakes," they wrote in a 2018 law review paper. "Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence."
Even more worryingly, that kind of loss of trust could be very difficult to regain.
"We really have to think about how do we inbuild some kind of security so that we can ensure that there is some degree of trust with all the digital content that we interact with on a day-to-day basis," Nina Schick, political scientist and technology consultant who wrote the book "Deepfakes," told CBS News last year.
"Because if we don't, then any idea of a shared reality or a shared objective reality is absolutely going to disappear," she added.
But given recent developments, the situation is likely to get worse before it can get better. AI tools are making it easier to generate deepfakes every year. These days, an AI can be trained on a relatively small stockpile of audiovisual data to convincingly make it look like somebody is making a statement they never made.
"What’s different is that everybody can do it now," Britt Paris, an assistant professor of library and information science at Rutgers University, told the New York Times earlier this year.
"It’s not just people with sophisticated computational technology and fairly sophisticated computational know-how," Paris added. "Instead, it’s a free app."
That means we're already seeing a flood of deepfakes making the rounds online. So far this year, deepfaked clips of President Joe Biden declaring a national draft and of podcaster Joe Rogan promoting male enhancement products have gone viral on TikTok.
It's not just lawyers who are leading the charge in rewiring reality. Political parties are jumping on the bandwagon, too.
Last month, the Republican National Committee released an attack ad targeting Joe Biden's reelection campaign that heavily featured AI-generated images of China invading Taiwan and of US borders being "overrun" by "illegals."
In the midst of all that destabilization, we should expect to see more lawyers reaching for the "deepfake defense" in the near future.
And while courts have successfully held the line so far, there's no guarantee that a judge won't eventually be swayed by a particularly convincing argument.