Oh, good.

Nuclear Catastrophe

OpenAI has created a new team whose whole job is heading off the "catastrophic risks" that could be brought on by artificial intelligence.

Oh, the irony.

In a blog post, OpenAI said its new preparedness team will "track, evaluate, forecast, and protect" against AI threats, up to and including those that are "chemical, biological, radiological, and nuclear" in nature.

In other words, the company at the forefront of making AI a household anxiety, while also profiting hugely from the technology, claims it's going to mitigate the worst things AI could do, without actually explaining how it plans to do that.

Why So Serious

Besides the aforementioned doomsday scenarios, the preparedness team will work on heading off "individual persuasion" by AI, which sounds a lot like tamping down the tech's burgeoning propensity for convincing people to do things they otherwise wouldn't.

The team will also tackle cybersecurity concerns, though OpenAI didn't go into detail about what that, or anything else the announcement mentioned, for that matter, would entail.

"We take seriously the full spectrum of safety risks related to AI," the announcement continues, "from the systems we have today to the furthest reaches of superintelligence."

We might have different definitions of what taking things "seriously" means here, because from our vantage point, working to build smarter AIs doesn't seem like a great way to make sure AI doesn't end the world. But we digress.

AI Anxiety

In its very first line, the update said that OpenAI is taking it upon itself to mitigate major risks associated with AI "as part of our mission of building safe [artificial general intelligence]."

Artificial general intelligence (AGI) is the industry term for AI with human-level or perhaps superhuman intelligence, a point on which OpenAI CEO Sam Altman can't seem to agree with himself.

Altman's stance on both AGI and the potentially dangerous future of AI has been the subject of scrutiny. He's repeatedly made public comments about how anxious the technology seems to make him, a confounding reality given his company's great successes and pivotal role in developing the tech.

In May, the CEO was even candid in testimony before Congress about his concerns over what could happen if AI goes off the rails.

"I think if this technology goes wrong, it can go quite wrong," Altman said at the time. "And we want to be vocal about that. We want to work with the government to prevent that from happening."

This preparedness team is likely an outgrowth of the OpenAI CEO's otherwise open-ended agitating about the dangers AI could pose.

At the same time, it seems awfully strange that a firm whose explicit goal is building an AGI that benefits all of humanity would hand-wring this way about its potential to create a "catastrophic" disaster.

More on OpenAI: OpenAI Says It’s Fine If ChatGPT Occasionally Accuses Innocent People of Crimes

