Creation Myth

OpenAI’s Latest AI Was Created Using “Itself,” Company Claims

Is the singularity upon us?
Frank Landymore
Getty / Futurism

On Thursday, OpenAI announced GPT-5.3-Codex, a new coding model that the company claims is 25 percent faster than its predecessor, alongside other impressive benchmark results.

The real head-turner, however, is another claim the Sam Altman-led company made about its development. GPT-5.3-Codex, supposedly, is its first model “that was instrumental in creating itself,” with its team “blown away” by the results.

Is the Singularity nearly upon us? Is this the long-awaited sign of recursive self-improvement, a point at which the machines are finally able to continually rewrite their own code to transform themselves into superior beings?

Not quite. In less sensational wording, OpenAI describes the AI's role as being used to “accelerate its own development.” What that looked like, according to the blog post, is that “the Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”

In other words, GPT-5.3-Codex helped the human coders with some of their tasks — impressive, maybe, but hardly a sign of humankind’s immediate obsolescence.

The imaginations of AI boosters, nonetheless, went into overdrive. The responses to a post on the r/singularity subreddit about the news were filled with a mixture of doom-tinged hype, hype-tinged doom, and a good deal of gallows humor. “I hope everyone remembers how good a mid level manager I was before the machines came,” one user wrote. The sentiment extended to X. “Holy moly — so it begins!” tweeted another user, who runs an AI newsletter.

Taken together, it’s a sign of how the discourse around AI is still dominated by sci-fi concepts and rhetoric, and distorted by hype. AI companies have been more than occasionally guilty of stoking the flames. Last month, the head of Anthropic’s Claude Code, Boris Cherny, claimed that “pretty much” 100 percent of the company’s code is now AI-generated using its own model. What this actually looks like behind the scenes, though, remains hazy. And it’s still far removed from the idea of AI models building new models completely autonomously.

Anthropic isn’t alone, either. With the new Codex release, Altman — who’s been on an emotional rollercoaster as of late — added to the pile with a sob story about how his coding tool was so good that it made him depressed.

“I built an app with Codex last week. It was very fun,” Altman tweeted. “Then I started asking it for ideas for new features and at least a couple of them were better than I was thinking of. I felt a little useless and it was sad.” Who said the “vibe” in “vibe-coding” had to be good?

More on AI: Investors Concerned AI Bubble Is Finally Popping


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.