DARPA Spent $68 Million on Technology to Spot Deepfakes

New military algorithms can tell whether a video was doctored, but DARPA thinks it’s losing the fight.

11. 19. 18 by Dan Robitzski
Image by Victor Tangermann

Fake News

DARPA, the U.S. military’s research division, is hell-bent on fighting high-tech forgeries. In the past two years, it’s spent $68 million on digital forensics technology to flag them.

For DARPA, spotting and countering “deepfakes” — doctored images and videos that can be used to generate propaganda and deceptive media like fictionalized political speeches or pornography — is a matter of national security, reports the Canadian Broadcasting Corporation.

Losing Battle

For all of its hard work to spot edited videos, DARPA may still be fighting a losing battle. That’s according to Hany Farid, a computer scientist and digital forensics expert at Dartmouth College.

“The adversary will always win, you will always be able to create a compelling fake image, or video, but the ability to do that if we are successful on the forensics side is going to take more time, more effort, more skill and more risk,” Farid told the CBC.


The CBC acquired some videos and images of DARPA’s deepfake-spotting artificial intelligence in action, and it’s not the most impressive highlight reel.

In one, the algorithm is able to go frame-by-frame to spot discrepancies. This is an impressive feat for an automated system, but the video in question is clearly edited — it’s nowhere near as sophisticated as the most advanced deepfakes out there, which can imitate a real person’s voice and facial expressions.

Putting a Finger on It

Other examples obtained by CBC showed off DARPA’s algorithm spotting inconsistencies in lighting and other subtle cues that give away a doctored video.

People who want to create misleading deepfakes may always be a step ahead of the people trying to stop them, but DARPA’s digital forensics program still has another two years of research ahead of it.


As long as DARPA stays on top of new deepfake techniques and keeps its algorithms on the cutting edge, it may be able to spot all but the most advanced, high-budget and high-tech productions.

That way, when a video of Donald Trump announcing a nuclear strike plunges the world into the apocalypse, we’ll at least know it was the real thing.

READ MORE: A new ‘arms race’: How the U.S. military is spending millions to fight fake images [CBC]

More on DARPA’s deepfake research: If DARPA Wants To Stop Deepfakes, They Should Talk To Facebook And Google


