Closed Book

OpenAI Researcher Quits, Saying Company Is Hiding the Truth

It's not letting potentially damning research get out there.
by Frank Landymore
OpenAI is making it hard for its researchers to publish research that tells the truth about AI's potentially negative economic impact.
Illustration by Tag Hartman-Simkins / Futurism. Source: Chip Somodevilla / Getty Images

OpenAI has long published research on the potential safety and economic impact of its own technology.

Now, Wired reports that the Sam Altman-led company is becoming more “guarded” about publishing research that points to an inconvenient truth: that AI could be bad for the economy.

The perceived censorship has become such a point of frustration that at least two OpenAI employees working on its economic research team have quit the company, according to four Wired sources.

One of these employees was economics researcher Tom Cunningham. In a parting message shared internally, he wrote that the economic research team was veering away from doing real research and instead acting like its employer’s propaganda arm.

Shortly after Cunningham’s departure, OpenAI’s chief strategy officer Jason Kwon sent a memo saying the company should “build solutions,” not just publish research on “hard subjects.”

“My POV on hard subjects is not that we shouldn’t talk about them,” Kwon wrote on Slack. “Rather, because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes.”

The reported censorship, or at least hostility toward pursuing work that paints AI in an unflattering light, is emblematic of OpenAI’s shift away from its nonprofit and ostensibly altruistic roots as it transforms into a global economic juggernaut.

When OpenAI was founded in 2015, it championed open-source AI and research. Today its models are closed-source, and the company has restructured itself into a for-profit public benefit corporation. Reports also suggest that the private entity is planning to go public at a $1 trillion valuation, in what would be one of the largest initial public offerings of all time, though exactly when is unclear.

Though its nonprofit arm remains nominally in control, OpenAI has garnered billions of dollars in investment and signed deals that could bring in hundreds of billions more, while also entering contracts to spend similarly dizzying sums. On one end, AI chipmaker Nvidia has agreed to invest up to $100 billion in the company; on the other, OpenAI says it will pay Microsoft up to $250 billion for its Azure cloud services.

With that sort of money hanging in the balance, the company has billions of reasons not to release findings that could shake the public’s already wavering faith in its tech, whether that’s fear of AI destroying or replacing jobs, talk of an AI bubble, or warnings of existential risk to humankind.

OpenAI’s economic research is currently overseen by Aaron Chatterji. According to Wired, Chatterji led a report released in September that showed how people around the world use ChatGPT, framing the findings as proof that the chatbot creates economic value by boosting productivity. If that seems suspiciously glowing, an economist who previously worked with OpenAI, and who chose to remain anonymous, alleged to Wired that the company was increasingly publishing work that glorifies its own tech.

Cunningham isn’t the only employee to leave the company over ethical concerns about its direction. William Saunders, a former member of OpenAI’s now-defunct “Superalignment” team, said he quit after realizing the company was “prioritizing getting out newer, shinier products” over user safety. Steven Adler, a former safety researcher who departed last year, has repeatedly criticized OpenAI for its risky approach to AI development, highlighting how ChatGPT appeared to be driving some of its users into mental health crises and delusional spirals. And Wired noted that Miles Brundage, OpenAI’s former head of policy research, complained after leaving last year that it had become “hard” to publish research “on all the topics that are important to me.”

More on OpenAI: Sam Altman Says Caring for a Baby Is Now Impossible Without ChatGPT


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.