
OpenAI has released a new smartphone app — currently invite-only — designed to rival TikTok with an infinite barrage of AI slop.
The app accompanies the company’s latest text-to-video and audio AI generator, Sora 2, which it claims is “more physically accurate, realistic, and more controllable than prior systems.”
A two-minute clip celebrating the announcement was met with predominantly negative reactions, with netizens dismissing it as “unsettling” and “soulless.”
Worse yet, making it this easy to generate photorealistic AI videos has troubling implications, especially when it comes to impersonation.
Ironically, OpenAI’s own Sora developer, Gabriel Petersson, demonstrated how easy it is to generate fake CCTV footage of anyone (in this case, OpenAI CEO Sam Altman) “stealing [graphics cards] at Target.”
The clip shows Altman getting caught by a nearby security guard after trying to walk out of a store with a GPU box — a gag meant to poke fun at the company’s frantic multibillion-dollar bids to secure AI hardware. Specialized AI hardware has become an extremely hot commodity, with AI chipmaker Nvidia announcing a $100 billion partnership with OpenAI just last week.
But light ribbing of a tech executive aside, the video paints a dystopian picture of a future in which anybody could easily be framed for a crime they didn’t commit.
People were quick to point out that Petersson’s gaffe, which was followed by several other videos of Altman sleeping in an office chair or making people dance on a train platform, felt tone-deaf.
“OpenAI employees are very excited about how well their new AI tool can create fake videos of people doing crimes and have definitely thought through all the implications of this,” Washington Post reporter Drew Harwell posted on Bluesky.
“Every defense attorney now has a pre-written motion when it comes to video evidence, I see,” another user commented.
We’ve already seen instances of law enforcement using AI-powered facial recognition to identify perpetrators, despite glaring inaccuracies in the tech.
As WaPo reported earlier this year, officers in St. Louis used facial recognition to build a case against an innocent 29-year-old father of four, despite warnings that the tech “should not be used as the sole basis for any decision.” While the case was eventually dismissed, experts warn that the episode could set a worrying precedent.
The use of AI apps to generate transcripts of body cam videos has also raised concerns that the tech could exacerbate existing problems in law enforcement, including racism and sexism.
Now, with the advent of powerful text-to-video AI generators, like Sora 2, it’s becoming even easier to place a target at a crime scene they never visited.
For its part, OpenAI claims that its new app’s “cameo” feature, which allows you to “drop yourself straight into any Sora scene,” will protect regular people from having their likeness show up in AI-generated videos without their consent.
“With cameos, you are in control of your likeness end-to-end with Sora,” the company’s announcement reads. “Only you decide who can use your cameo, and you can revoke access or remove any video that includes it at any time.”
“Videos containing cameos of you, including drafts created by other people, are viewable by you at any time,” OpenAI promised.
The company also said that it’s taking “measures to block depictions of public figures” (whether Altman consented to Petersson’s videos remains unclear) and that “every video, profile, and comment can be reported for abuse, with clear recourse when policies are violated.”
It’s too early to tell how all of this will play out. But the fact that the company’s own employees are already demonstrating how easy it is to generate fake videos of innocent people committing crimes doesn’t bode well.
OpenAI has already struggled to implement effective guardrails for its large language models. It remains to be seen whether Sora will be any different in that respect.
More on OpenAI: OpenAI Ridiculed for Its Latest Cash Grab