That was fast.

Safety First

Less than ten days after dissolving its safety-oriented Superalignment team, OpenAI has announced a new safety board — with embattled CEO Sam Altman at its helm.

In a statement, OpenAI announced that it was creating a new "safety and security committee," an offshoot of its now-infamous board of directors that, alongside Altman, includes board chair Bret Taylor, Quora CEO and cofounder Adam D'Angelo, and corporate attorney Nicole Seligman.

The original Superalignment team, which had been announced less than a year ago, was meant to "steer and control AI systems much smarter than us," but was dissolved earlier this month.

This new team's creation also comes after the exits of several prominent OpenAI executives, including Superalignment chief Jan Leike — who just announced that he's joining his fellow company expats at Anthropic — and team cofounder Ilya Sutskever.

In a lengthy thread on X-formerly-Twitter, Leike echoed the Superalignment team's stated ethos and suggested that OpenAI has veered from that path.

"Stepping away from this job has been one of the hardest things I have ever done," he wrote, "because we urgently need to figure out how to steer and control AI systems much smarter than us."

Model Behavior

The dissolution of the old team and the creation of a new Altman-led one highlight the turbulence that has been rocking the company behind the scenes, raising questions about the CEO's own role and his ability to delegate matters central to his company's mission.

In a series of posts earlier this month, Leike had accused OpenAI of abandoning its responsibilities, writing that safety had taken a "backseat to shiny products."

In the latest announcement, meanwhile, OpenAI teased that it has "recently begun training its next frontier model," though there's no word yet on whether that model is GPT-5, the much-anticipated update to the large language model that undergirds ChatGPT.

"We anticipate the resulting systems to bring us to the next level of capabilities on our path to [artificial general intelligence]," the statement reads.

Unlike the debacle last fall that saw Altman fired and promptly reinstated, affectionately referred to as the "turkey-shoot clusterfuck," this shakeup offers little visibility into what's going on behind the scenes at OpenAI.

We still can't say why the Superalignment team was dissolved and replaced in such a manner. Was it a cost-cutting measure or was the team's dissolution the result of internal disagreements? Was it both?

Given that Altman is leading the new committee and the person who ran its predecessor is now at a rival firm, one thing seems certain: there's plenty of drama.

More on OpenAI drama: OpenAI Safety Worker Quit Due to Losing Confidence Company "Would Behave Responsibly Around the Time of AGI"
