As Facebook and Instagram owner Meta seeks to jam generative AI into every feasible corner of its products, a disturbing Forbes report reveals that the company is failing to stop its platforms from being flooded with AI-generated sexualized imagery of children.

Per Forbes, image-generating AI tools have given rise to a new wave of sexualized images of children, which are proliferating throughout social media — the report focused on TikTok and Instagram — and across the web.

The images generally feature young, preadolescent girls, many of them suggestively dressed; the followers of the social media accounts that post the images are overwhelmingly older men, who leave disgusting, sexually charged messages in the comments. But while objectively horrendous, the images currently sit in a legal gray area.

Though the models behind these images are trained on photos of real children — and, at least in the case of the AI image generator Stable Diffusion, on real child sexual abuse material (CSAM), according to a December study conducted by the Stanford Internet Observatory — the resulting image depicts a fake human. And while the images are sexualized, they're not necessarily overtly explicit, ultimately rendering them legal (explicit CSAM is always illegal, even when fake). Predators appear to be using this fuzzy gray area as a vehicle to explore their darkest fantasies — a practice that experts fear may ultimately result in real-world exploitation of kids.

"Child erotica is the gateway," Heather Mahalik Barnhart, a digital forensics expert, told Forbes, adding that "when you look at who's following it, it’s not normal."

Obviously, it's awful that this stuff is anywhere. That these images are spreading throughout Instagram, though, presents a particularly troubling irony for Meta, a company that's desperately trying to keep up in the AI race.

So far, the tech giant's multibillion-dollar AI efforts — a chatbot that disparaged company CEO Mark Zuckerberg, bizarre AI versions of celebrity characters, a news-summarizing AI assistant, and so on — have yet to make a significant cultural splash. Meanwhile, the company seems unable to use the tools at its disposal, AI included, to stop harmful AI-generated content — made with tools much like the ones Meta itself is building — from spreading across its own platforms. We were promised creativity-boosting innovation. What we're getting at Meta is a platform-eroding pile of abusive filth that the company is clearly unable to manage at scale. (On a similar note, have you scrolled through Facebook lately?)

For its part, per Forbes, Meta permanently removed from Instagram the specific accounts that the reporters flagged. A spokesperson for the company also insisted that it removes real and fake CSAM from its platforms and reports the images to the National Center for Missing and Exploited Children (NCMEC). (TikTok also removed Forbes-flagged accounts, with a spokesperson telling Forbes that the social media company has "strict policies against AI-generated content of minors to protect young people and keep TikTok inhospitable to behavior that seeks to harm them.")

But social media companies going account by account to manually remove this stuff when reporters specifically flag it isn't exactly a scalable solution. And as Fallon McNulty, director of NCMEC's CyberTipline, told Forbes, because "these are all being trained off of images of real children, whether depicted in full likeness or not," NCMEC "doesn't really see that side of the argument that there's a world where this is okay."

"Especially given some of the commentaries that the images are attracting, it doesn't sound like the audience that is reviewing, ingesting, consuming those images is innocent," McNulty continued, adding that she "would hope to see" social media platforms "removing that content and not creating a space on their platforms for individuals to be sexualizing children."

More on AI: Google Strategist Quits, Slams Company's AI Work as Motivated by Greed and Fear

