They're risk-takers, after all.

Limited Release

Sora is finally here — sort of.

OpenAI announced today that it would publicly release its much-hyped video generation AI tool to users in certain countries, after the model spent nearly a year in closed beta following its initial unveiling.

According to a livestream on its YouTube channel hosted by CEO Sam Altman and several of the startup's other leaders and research scientists, Sora will be open to use in the US and in "most countries internationally" — but will remain unavailable in Europe and the UK.

Those are pretty significant snubs.

"We're going to try our hardest to be able to launch there, but we don't have any timeline to share yet," Altman said, only saying that it would "a while."

Shady Safety

This isn't the first time that OpenAI has hit snags while trying to deploy its products across the pond. The rollout of its Advanced Voice Mode for ChatGPT, for example, was also delayed by several weeks, likely due to concerns about complying with the European Union's General Data Protection Regulation (GDPR).

With Sora, the stumbling block may once again be related to the regulatory environment.

"We obviously have a big target on our back as OpenAI, so we want to prevent illegal activity of Sora, but we also want to balance that with creative expression," Sora product lead Rohan Sahai said during the livestream.

As CNBC noted, the startup's product chief Kevin Weil revealed in a Reddit thread in October that one of the reasons Sora hadn't been released at that point was that the company still needed "to get safety/impersonation/other things right."

Sorting It Out

OpenAI hasn't elaborated on what the "illegal activity" Sahai alluded to might be, or whether it factored into the staggered rollout, but Weil's comments and generative AI's checkered history of misuse give us some ideas.

Misinformation and disinformation are among the largest concerns surrounding generative AI tech. If Sora doesn't have strong enough guardrails in place to prevent it from impersonating a celebrity or a politician, that could pose a major legal liability.

Copyright could be a similar sticking point, as OpenAI has faced scrutiny over the provenance of its training data. On the user-facing side of things, image generators like the ChatGPT-integrated DALL-E typically refuse prompts that include the names of artists or other famous figures for these very reasons, and Sora may need similar restrictions.

We also can't gloss over the fact that Sora could be used to create far darker material like extremely violent imagery and child sexual abuse material (CSAM), as some people have been arrested for allegedly doing.

All of these should be major worries for OpenAI. But balancing creative freedom with safety is not easy, and one might argue that releasing these AI models in their current hallucination-prone states is already playing with fire.

More on OpenAI: OpenAI Employee Says They’ve "Already Achieved AGI"
