
The Ever-So-Ethical OpenAI Just Replaced Its “Core Values” With Completely Different Ones

Oh, okay.
Image: Sam Altman, CEO of OpenAI, arriving at the Allen & Company Sun Valley Conference on July 11, 2023 in Sun Valley, Idaho. Kevin Dietsch/Getty Images

Artificial, In General

In recent weeks, OpenAI quietly changed its “core values” list to include a focus on artificial general intelligence (AGI) that wasn’t explicitly listed there before.

As Semafor reports, the firm as recently as September 21 listed its values on a job openings page as “Audacious,” “Thoughtful,” “Unpretentious,” “Impact-driven,” “Collaborative” and “Growth-oriented,” per a snapshot of the page on the Internet Archive.

Visit the page now, though, and you’ll see an entirely different list that includes “AGI focus” as its very first value. Its other new core values: “Intense and scrappy,” “Scale,” “Make something people love,” and “Team spirit.”

Sure, it’s all corporate blather. But you can’t help but wonder: if you can replace all your core values at the drop of a hat, were they really core values to begin with?

Whiplash!

And let’s zoom in on that number one core value, “AGI focus,” because it’s a perfect example of how the company’s favorite terms can feel like works in progress.

In February, OpenAI’s inscrutable CEO Sam Altman wrote in a company blog post that AGI can broadly be defined as “systems that are generally smarter than humans,” but in a wide-ranging New York Magazine interview published last month, he’d downgraded the definition to AI that could serve as the “equivalent of a median human that you could hire as a co-worker.”

So which is it, then? Do OpenAI and its CEO think that AGI, its purported new core value, will mean superhuman artificial intelligence, or an AI that’s just about as smart as the average person?

Shifting Tides

While we’ve reached out to OpenAI to clear up that pretty wide discrepancy in definition, the answer may not become clear anytime soon given that the company’s goals have, like its values, shifted over time as well.

Founded in 2015 by Altman, Elon Musk, and a handful of others who are by and large no longer affiliated, OpenAI was created as a nonprofit research lab that was meant, essentially, to build good AI to counter the bad. Though the firm still pays lip service to that original goal, its drift from nonprofit AI do-gooder to for-profit endeavor led to Musk’s exit in 2019, and that purpose-shifting appears to have bled into its self-descriptions as well.

So yeah, the new core values list feels like a bunch of fluff. It does, however, seem to show OpenAI’s hand when it comes to its single-minded focus looking forward.

“We are committed to building safe, beneficial AGI that will have a massive positive impact on humanity’s future,” the OpenAI job postings page now explains. “Anything that doesn’t help with that is out of scope.”

More on OpenAI: AI’s Electricity Use Is Spiking So Fast It’ll Soon Use as Much Power as an Entire Country

Noor Al-Sibai

Senior Staff Writer

I’m a senior staff writer at Futurism, where my work covers medicine, artificial intelligence and its impact on media and society, NASA and the private space sector.