
The launch of OpenAI’s Sora 2 has been, to put it mildly, incredibly chaotic. When it rolled out last week, the company’s latest text-to-video app became an instant smash hit among fans, who used it to churn out loads of delirious and edgy AI slop.
Many of these videos feature recognizable characters like SpongeBob cooking meth, raising the obvious question of whether the AI company was flagrantly ignoring copyright law. And as tons of Sora-made videos parodying OpenAI CEO Sam Altman hit the web, including some depicting fake CCTV footage of him committing crimes, the implication that the tech could easily be used to fabricate damaging videos of people without their permission couldn’t be ignored.
On that note, it seems that Sora has found another nightmarish use: stalking.
Barely a day after Sora 2 launched, the journalist Taylor Lorenz said that a “psychotic stalker” was already using OpenAI’s tool to make AI videos of her, while allegedly running hundreds of accounts dedicated to her.
“It is scary to think what AI is doing to feed my stalker’s delusions,” Lorenz wrote in a tweet. “This is a man who has hired photographers to surveil me, shows up at events I am at, impersonates my friends and family members online to gather info.”
The tool, thankfully, allowed Lorenz to block and delete unapproved videos with her image from the app, she said. But the stalker may have already downloaded the AI creations — and it’s alarming that they were even allowed in the first place.
While Lorenz’s stalker went to extreme lengths, Sora being used for stalking and harassment is unlikely to be an edge case, because deepfaking yourself and others into videos is one of the app’s core selling points. OpenAI innocuously calls these deepfakes “Cameos,” which are “reusable characters” synthesized from videos of yourself that you upload to the app. You can also use other people’s Cameos, with their permission, of course.
But Sora 2’s guardrails aren’t reliable. Some users have already shown that they could generate risqué and suggestive videos of people. And in Sora 2’s system card, OpenAI admitted that the AI app failed to block prompts for generating a video with nudity or sexual content using a real person’s likeness 1.6 percent of the time, as PCMag spotted. That’s a lot of videos slipping through the cracks across millions of prompts.
It’s unclear how Lorenz’s stalker was allegedly able to use Sora to deepfake her. OpenAI claims that the app blocks users from uploading photos with faces. But image-generating AI tools have been used to sexually harass people, usually women, ever since they went mainstream. Pop star Taylor Swift was targeted last year by a wave of fake, AI-generated “nudes.”
And regular folks aren’t safe, either. One stalker allegedly used AI to generate fake nudes of a woman he was harassing and set up a chatbot that imitated her likeness, and another was accused of creating pornographic AI videos of nearly a dozen of his victims and sending the videos to their families.
These videos are becoming easier to make and more convincing than ever. And it doesn’t help that Altman has undercut the seriousness of deepfaking your friends by letting his fans run wild with his own likeness. OpenAI, it seems, would like us all to quickly get used to the idea of spoofing realistic videos of each other — made possible by black-box algorithms codifying the features of your face — as just a bit of harmless fun.
More on OpenAI: OpenAI’s Huge New Project Is Running Into Trouble Behind the Scenes