We’ve all seen that video in which AI-powered algorithms synthesize Barack Obama’s voice and facial movements to make it look like he’s giving a speech he never gave. It’s creepy and thought-provoking. And apparently it recently provoked some thoughts within the Department of Defense’s research arm, the Defense Advanced Research Projects Agency (DARPA), too.

Over the course of the summer, DARPA will fund a contest in which participants compete to create the most believable fake AI-generated photos, videos, and audio recordings, collectively referred to as “deepfakes,” as reported by MIT Technology Review. Contestants will also try to develop new, more advanced tools to detect these deepfakes, which are becoming harder to spot as people get better at building artificial intelligence algorithms specifically designed to fool us.

In particular, DARPA is concerned about generative adversarial networks (GANs), a class of algorithms that pits two neural networks against each other: one generates fake content while the other tries to tell it apart from the real thing, until the generator learns to produce output indistinguishable from material made by people. In this case, that could mean an AI-generated video of a world leader saying something they never actually said in a speech.
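To make that adversarial tug-of-war concrete, here is a minimal sketch of a GAN training loop written in PyTorch. The framework choice, the toy one-dimensional data, and every name in it are illustrative assumptions, not anything DARPA or the companies mentioned actually use. A generator learns to mimic samples from a simple Gaussian distribution while a discriminator learns to tell real samples from generated ones; the same dynamic, scaled up to images and audio, is what makes deepfakes convincing.

```python
# Minimal GAN sketch (illustrative only): a generator learns to imitate
# samples from a 1-D Gaussian while a discriminator tries to catch it.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # produces a fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),        # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Real data: samples from N(4, 1.5). Fake data: generator output from noise.
    real = torch.randn(64, 1) * 1.5 + 4.0
    fake = generator(torch.randn(64, latent_dim))

    # Train the discriminator to label real samples 1 and fakes 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator into outputting 1 on fakes.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```

As training goes on, the generator’s output distribution drifts toward the real one, and the discriminator’s job gets harder, which is exactly why detection tools that chase generators tend to fall behind.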

It’s easy to see why the Department of Defense is concerned. Right now the president of the United States boasts about the nation’s nuclear arsenal over social media while the U.S. and North Korea inch toward talks of disarmament. The last thing anyone needs is for a believable fake video of President Trump or Kim Jong Un announcing plans to launch missiles to go viral.

But it’s not just internet pranksters or malicious enemies of the state who are making these videos. A quick scan through Facebook’s and Google’s published AI research shows that both companies have invested in developing algorithms that can process, analyze, and alter photos and videos of people. If DARPA wants to nip this potential digital threat in the bud, maybe it should look into what the tech giants are doing.

Some research projects are relatively benign but could be used to smooth out the glitches in an altered or fake video, like Google AI projects designed to reduce noise in videos and make them look more realistic. Some are, well, creepier, like Google’s AI algorithm that creates a neutral, front-facing photo of a person by analyzing other pictures of them.

The problem is that AI researchers often take a “can we?” rather than a “should we?” approach to making the coolest stuff possible. This is particularly relevant for a Facebook research project that found a way to animate the profile photos of its users. The researchers behind the project said they did not consider any ethical issues or potential misuse of their work while building it; they just wanted to create as sophisticated a product as possible.

The trouble for DARPA is that fixing this requires a change in attitude toward how technology is developed, and it may also require keeping a closer watch over tech companies with vast resources and skilled research teams at their disposal. Until then, we’re likely to keep seeing better and better AI-generated deepfakes created just to see if they can be.
