The company admitted it "ran into an issue."
Mortal Domains
The full version of OpenAI's latest AI model, called o1, appears to have leaked on Friday — only for the company to shut it down a mere two hours later.
As Tom's Guide reports, a number of users on X-formerly-Twitter discovered that a simple tweak to the URL allowed them to access the AI model.
The model was first announced in September and has only been available in "preview" form to paying users since then.
But by changing the URL, users claimed to have found a workaround to get access to the full thing.
In a statement to Futurism, an OpenAI spokesperson confirmed that "we were preparing limited external access to the OpenAI o1 model and ran into an issue."
"This has now been fixed," the statement reads.
Glimpses users have gotten so far suggest that the full release could mark a serious improvement over anything we've seen from the company previously.
Let 'Er Rip
Users were initially impressed by the purported model's capabilities, from solving a complex math problem to cracking an image puzzle.
One user found that the AI model could spit out a "full o1 Chain of Thought" after being asked to analyze a picture of a recent SpaceX launch.
Another Reddit user found that it "managed to process a massive JSON dump that wasn't feasible with o1-preview due to its token limitations," referring to a common file format that coders use to store human-readable text.
We still don't know when OpenAI will make the full version of its o1 model available to users. As Tom's Guide points out, the Sam Altman-led firm may be waiting out the current US presidential election this week.
But even in its "preview form," the o1 model has already impressed experts with its improved ability to solve standardized tests and new chain-of-thought reasoning.
Despite some impressive benchmarks, though, OpenAI recently found that its ability to provide correct — and not "hallucinated" — answers still leaves plenty to be desired.
Whether that will change at all with the public release of the full version of the o1 model remains to be seen.
Update: The piece has been updated with a statement from OpenAI.
More on OpenAI: OpenAI Research Finds That Even Its Best Models Give Wrong Answers a Wild Proportion of the Time