A new deepfake-generating system called FSGAN can swap people's faces in real time, without the extensive training an AI algorithm normally requires to learn what a specific face looks like.
That means deepfakes could soon proliferate more rapidly than ever, according to Motherboard, because creating the deceptive, manipulated videos now requires less technical know-how.
The system, designed by scientists at Israel's Bar-Ilan University, starts from a target video of a person; the movements and expressions in that video are then mapped onto someone else's face. With this system, only a photo of the deepfake's target is required to make them appear to say whatever the deepfake's creator wants, according to research published to the preprint server arXiv on Friday.
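To make the idea concrete, here is a minimal, purely illustrative sketch of the final step such a face-swap pipeline typically involves: compositing a generated face into each frame of the target video using a soft segmentation mask. This is not FSGAN's actual code, and the function names, shapes, and blending approach here are assumptions; it only demonstrates the general alpha-blending technique.

```python
# Hypothetical sketch of one stage of a face-swap pipeline: blending a
# generated ("reenacted") face into target video frames via a soft mask.
# This is NOT FSGAN's implementation -- just a toy illustration.
import numpy as np

def blend_face(target_frame, reenacted_face, mask):
    """Alpha-blend a reenacted face into a target frame.

    target_frame, reenacted_face: (H, W, 3) float arrays in [0, 1]
    mask: (H, W) float array in [0, 1], where 1 marks the face region
    """
    alpha = mask[..., None]  # broadcast the mask over color channels
    return alpha * reenacted_face + (1.0 - alpha) * target_frame

# Toy "video": two blank 4x4 frames
frames = [np.zeros((4, 4, 3)) for _ in range(2)]
face = np.ones((4, 4, 3))              # stand-in for a generated face
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                   # face occupies the center region

swapped = [blend_face(f, face, mask) for f in frames]
```

In a real system, the `reenacted_face` would come from a neural generator driven by the source video's motion, and the mask from a segmentation network; the point here is only that, once those pieces exist, the per-frame compositing itself is simple.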
In a demo video, it's clear that the results aren't perfect. The background wobbles, and the celebrity pairings shown off (like Alfonso Cuarón crossed with Regina King at the 1:13 mark) are more comical than anything. But it's easy to see how a bad actor could use this tech to create believable propaganda.
Show The World
The engineers behind this new deepfake tech said they’re sharing their code with the world because hiding it away wouldn’t stop similar tech from popping up, per Motherboard.
That's a common line from deepfake developers: that they're publishing their deepfake tech so someone (meaning someone else) can look it over and develop some sort of deepfake-detecting countermeasure. If only there were an easier way, like perhaps not building consumer-friendly deepfakes in the first place.
READ MORE: This Program Makes It Even Easier to Make Deepfakes [Motherboard]
More on deepfakes: Congress Is Officially Freaking Out About Deepfakes