Last week, OpenAI CEO Sam Altman published a blog post about how, he claims, the company will use artificial general intelligence (AGI), the still-hypothetical point at which AI systems can match or even exceed human intellect, to "benefit all of humanity."

The company has drawn copious attention lately with its AI chatbot ChatGPT, a versatile tool whose popularity has exploded since its release just a few months ago.

The tool, built on the company's large language model (LLM) known as GPT, can expertly construct responses to a staggering range of prompts, an ability that has fooled some users into thinking it's sentient or has a personality.

In reality, however, LLMs have a very long way to go before they can compete with the intellect of a human being, which is why several experts are crying foul on Altman's recent blog post, dismissing it as meaningless and misleading.

After all, AGI is a vague term, borrowed from the realm of science fiction, referring to something that simply doesn't exist yet. In fact, we haven't even settled on a common definition.

"The term AGI is so loaded, it's misleading to toss it around as though it's a real thing with real meaning," Bentley University mathematics professor Noah Giansiracusa argued in a tweet. "It's not a scientific concept, it's a sci-fi marketing ploy."

"AI will steadily improve, there's no magic [moment] when it becomes 'AGI,'" he added.

In a Twitter thread, University of Washington linguistics professor Emily Bender took apart Altman's blog post bit by bit.

"From the get-go this is just gross," she argued. "They think they are really in the business of developing/shaping 'AGI.' And they think they are positioned to decide what 'benefits all of humanity.'"

Bender also pointed out Altman's "rhetorical sleight of hand": he starts by treating AGI as a hypothetical, but immediately turns it into "something that has 'potential.'"

By the end of his blog post, Altman goes so far as to claim that AGI is inevitable and "would also come with serious risks of misuse, drastic accidents and social disruption."

In short, Bender says, Altman is getting far ahead of himself, positioning his company as having already laid the foundation for an early AGI.

"Your system isn't AGI, it isn't a step towards AGI, and yet you're dropping that in as if the reader is just supposed to nod along," Bender argued.

To the linguist, OpenAI's recent decision to transform itself from an open-source research outfit into a profit-maximizing, capitalist entity is plainly showing through.

In his blog post, Altman argues that we need to "put regulation in place," which could allow "society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low."

But that's beside the point, Bender argued, especially given that OpenAI is a private company with no obligation to make its motives apparent to the world.

"The problem isn't regulating 'AI' or future 'AGI,'" she wrote. "It's protecting individuals from corporate and government overreach using 'AI' to cut costs and/or deflect accountability."

"There are harms NOW: to privacy, theft of creative output, harms to our information ecosystems, and harms from the scaled reproduction of biases," Bender added. "An org that cared about 'benefitting humanity' wouldn't be developing/disseminating tech that does those things."

More on OpenAI: Elon Musk Recruiting Team to Build His Own Anti-"Woke" AI to Rival ChatGPT

