"It’s going to get worse — a lot worse — before it gets better."

Fake AF

A striking number of viral photos from both sides of the Israel-Hamas conflict have been revealed to be AI fakes — and according to experts, the problem is only going to get worse.

In interviews with the Associated Press, researchers at multiple firms and organizations tasked with verifying the truthfulness of online claims said there has been a grim influx of faked AI images depicting butchered children, used to cast blame on each side of the bloody conflict between Israel and the terrorist group Hamas.

Imran Ahmed, the CEO of the Center for Countering Digital Hate, said that whether people share out-of-context images from the long list of previous conflicts in Israel and Palestine, newer digital fakes, or in some cases a combination of the two, the heartstring-tugging effect is the same.

"People are being told right now: Look at this picture of a baby," Ahmed said. "The disinformation is designed to make you engage with it."

While this is far from the first conflict in which AI-manipulated propaganda has been deployed — Russia's invasion of Ukraine brought about a similar spate of digital fakes — the faked photos of dead or injured children are not only a particularly gruesome instance of the technology's cruel power, but also a harbinger of worse things to come.

"It’s going to get worse — a lot worse — before it gets better," Jean-Claude Goldenstein, the CEO of the digital verification firm CREOpoint that has created a database of viral Gaza deepfakes, told AP. "Pictures, video, and audio: with generative AI, it’s going to be an escalation you haven’t seen."

Layers of Deception

Make no mistake: horrific numbers of children have died in the bloody conflict. Muddying the waters with fake images distorts that reality in numerous ways — not least by giving people who see real documentation of the horrors of war an excuse to dismiss it as an AI fabrication.

And to make matters even more complicated, tools meant to detect whether a photo is real or AI-manipulated can sometimes get it wrong — an increasingly well-known issue that bad actors can also exploit to sow further discord in a conflict that has no shortage of it.

With dedicated disinformation artists always a step ahead of efforts to debunk them, digital security expert David Doermann — who once led efforts at the Pentagon's Defense Advanced Research Projects Agency (DARPA) on the national security risks of AI disinformation — told the AP that governments and the private sector need not just to beef up their tech, but to apply more stringent regulations and standards as well.

"Every time we release a tool that detects this, our adversaries can use AI to cover up that trace evidence," Doermann, now a professor at the University of Buffalo, told AP. "Detection and trying to pull this stuff down is no longer the solution. We need to have a much bigger solution."

More on AI manipulation: Benzinga Retracts "Interview" With Rapper That Was Allegedly AI-Generated

