Just a few days after the full release of OpenAI's o1 model, a company staffer is now claiming that the company has achieved artificial general intelligence (AGI).

"In my opinion," OpenAI employee Vahid Kazemi wrote in a post on X-formerly-Twitter, "we have already achieved AGI and it’s even more clear with O1."

If you were anticipating a fairly massive caveat, though, you weren't wrong.

"We have not achieved 'better than any human at any task,'" he continued, "but what we have is 'better than most humans at most tasks.'"

Critics will note that Kazemi is seizing on a convenient and unconventional definition of AGI. He's not saying that the company's AI is more effective than a person with expertise or skills at a certain task, but that it can attempt such a wide variety of tasks, even if the results are dubious, that no human can compete with its sheer breadth.

A member of the firm's technical staff, Kazemi went on to muse about the nature of LLMs and whether or not they're simply "following a recipe."

"Some say LLMs only know how to follow a recipe," he wrote. "Firstly, no one can really explain what a trillion parameter deep neural net can learn. But even if you believe that, the whole scientific method can be summarized as a recipe: observe, hypothesize, and verify."

While that does come off somewhat defensive, it also gets to the heart of OpenAI's public outlook: that simply pouring more and more data and processing power into existing machine learning systems will eventually result in a human-level intelligence.

"Good scientists can produce better hypothesis [sic] based on their intuition, but that intuition itself was built by many trial and errors," Kazemi continued. "There’s nothing that can’t be learned with examples."

Notably, Kazemi's missive came right after news broke that OpenAI had removed "AGI" from the terms of its deal with Microsoft, so the business implications of the assertion are unclear.

One thing's for sure, though: we haven't yet seen an AI that can compete in the labor force with a human worker in any serious and general way. If that happens, the Kazemis of the world will have earned our attention.

More on AGI: AI Safety Researcher Quits OpenAI, Saying Its Trajectory Alarms Her

