DARPA, the U.S. military’s research division, is hell-bent on fighting high-tech forgeries. In the past two years, it’s spent $68 million on digital forensics technology to flag them.
For DARPA, spotting and countering “deepfakes” — doctored images and videos that can be used to generate propaganda and deceptive media like fictionalized political speeches or pornography — is a matter of national security, reports the Canadian Broadcasting Corporation.
For all of its hard work to spot edited videos, DARPA may still be fighting a losing battle. That’s according to Hany Farid, a computer scientist and digital forensics expert at Dartmouth College.
“The adversary will always win; you will always be able to create a compelling fake image or video. But the ability to do that, if we are successful on the forensics side, is going to take more time, more effort, more skill and more risk,” Farid told the CBC.
The CBC obtained videos and images of DARPA’s deepfake-spotting artificial intelligence in action, and it’s not the most impressive highlight reel.
In one, the algorithm is able to go frame-by-frame to spot discrepancies. This is an impressive feat for an automated system, but the video in question is clearly edited — it’s nowhere near as sophisticated as the most advanced deepfakes out there, which can imitate a real person’s voice and facial expressions.
Other examples obtained by CBC showed off DARPA’s algorithm spotting inconsistencies in lighting and other subtle cues that give away a doctored video.
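DARPA hasn’t published how its detectors work, but the frame-by-frame idea described above can be illustrated with a toy sketch: track a simple per-frame statistic (here, mean brightness) and flag frames where it jumps abruptly, since spliced or relit footage often breaks the smooth frame-to-frame consistency of a genuine recording. The function name, threshold, and statistic below are illustrative assumptions, not DARPA’s actual method.

```python
import numpy as np

def flag_frame_discrepancies(frames, threshold=10.0):
    """Flag frames whose mean brightness jumps sharply from the previous frame.

    frames: sequence of 2-D numpy arrays (grayscale frames).
    Uses a robust (median/MAD) score so a few large jumps don't mask themselves.
    Returns the indices of frames following an abnormal brightness change.
    """
    means = np.array([f.mean() for f in frames])
    diffs = np.abs(np.diff(means))              # frame-to-frame brightness change
    med = np.median(diffs)
    mad = np.median(np.abs(diffs - med)) or 1e-9  # avoid division by zero
    return [i + 1 for i, d in enumerate(diffs) if (d - med) / mad > threshold]

# Toy example: 20 stable noise frames with one artificially brightened frame.
rng = np.random.default_rng(0)
frames = [rng.normal(100, 1, (64, 64)) for _ in range(20)]
frames[12] += 40  # simulate a lighting inconsistency at frame 12
# Both the jump into frame 12 and the return to normal at frame 13 get flagged.
print(flag_frame_discrepancies(frames))
```

A real detector would of course look at far subtler cues than raw brightness (lighting direction, blink rates, compression artifacts), but the structure is the same: model what consecutive genuine frames should look like, then flag deviations.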
People who want to create misleading deepfakes may always be a step ahead of the people trying to stop them, but DARPA’s digital forensics program still has another two years of research ahead of it.
As long as DARPA stays on top of new deepfake techniques and keeps its algorithms on the cutting edge, it may be able to spot all but the most advanced, high-budget and high-tech productions.
That way, when a video of Donald Trump announcing a nuclear strike plunges the world into the apocalypse, we’ll at least know it was the real thing.
More on DARPA’s deepfake research: If DARPA Wants To Stop Deepfakes, They Should Talk To Facebook And Google