
It’s becoming increasingly clear that OpenAI put staggeringly little thought into the rollout of Sora 2, its latest text-to-video app, taking a “move fast and break things” approach that has resulted in plenty of drama.
Last week, the Sam Altman-led company released the TikTok-style app that churns out endless feeds of low-rent and mind-numbing AI slop.
It’s an “unholy abomination” that intentionally encourages users to generate deepfakes of others and potentially exposes them to copyright infringement, resulting in a meltdown of epic proportions reminiscent of the early days of ChatGPT.
Unsurprisingly, users quickly got to work producing the most controversial material imaginable, like clips of Nickelodeon’s SpongeBob SquarePants cooking up blue crystals in a meth lab and Altman grilling the dead husk of a disturbingly lifelike Pikachu. Users were even sharing full episodes of South Park that were entirely AI-generated.
Experts were appalled at the implications of handing users the ability to generate incriminating footage of real people. OpenAI’s own staff member, Sora developer Gabriel Petersson, whipped up viral fake CCTV footage of Altman shoplifting at Target, sparking a debate over the continued erosion of trust in what we see online.
Users also quickly started creating photorealistic videos of deceased celebrities, including pop icon Michael Jackson, rapper Tupac Shakur, and even painter Bob Ross. (Despite OpenAI promising to “block depictions of public figures,” the company told PCMag that it does “allow the generation of historical figures,” implying that dead celebs are fair game.)
The company’s guardrails, designed to quell “harassment, discrimination, bullying or similar prohibited content,” initially did little to address the situation. For instance, one user shared a 1990s TV commercial for a children’s toy set themed after deceased pedophile Jeffrey Epstein’s notorious Caribbean island.
Since then, OpenAI appears to have put its foot down, with users saying that its changes to the content filters have made the app “literally unusable for anything even remotely creative” and “more restrictive than North Korea.” Sora 2 users have been sharing screenshots showing that they had been slapped with multiple instances of “content violation.”
There’s an easy explanation for the U-turn: chances are that Altman’s phone has been ringing nonstop ever since the app launched, with furious lawyers demanding answers.
In an eyebrow-raising blog post, Altman claimed that companies were “very excited” that users were generating videos inspired by their intellectual property, and said that “we are going to have to somehow make money for video generation.”
His plan: pass on a cut of the proceeds to rightsholders. Where those proceeds will come from, or whether users will be charged per video generation, remains unclear, suggesting the company has put little thought into the future of its AI slop app.
“People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences,” he added. “We are going to try sharing some of this revenue with rightsholders who want their characters generated by users.”
“The exact model will take some trial and error to figure out, but we plan to start very soon,” Altman claimed.
OpenAI has also made it clear that in the case of any particularly problematic content being created using its tech, its users are to blame.
The company’s “media upload agreement” requires users to agree by tapping a simple checkbox that “you have all of the necessary rights to the media you upload.”
The agreement also threatens users, saying that “misuse of media uploads may result in your account being suspended or banned without refund.”
Another telling reversal: as the Wall Street Journal reported late last month, OpenAI initially said that rightsholders had to actively opt out of having their copyrighted materials appear in generations. Now it’s flipped, saying they need to opt in.
“We will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls,” Altman wrote in his blog post.
Futurism has reached out to OpenAI for more clarity on the terms of service and whether any users have been banned as a result of the agreement.
In other words, it sounds like OpenAI’s main consideration before the launch was simply making sure the app was a success. Now that it’s climbed the app store charts, it’s playing cleanup, trying to put out the legal fires its popularity has sparked.
More on Sora 2: OpenAI’s Sora 2 Is Generating Video of SpongeBob Cooking Meth, Highlighting Copyright Concerns