During a discussion about the future of AI at this year's World Economic Forum in Davos, moderator and CNN journalist Fareed Zakaria had a compelling question for OpenAI CEO Sam Altman.

"What’s the core competence of human beings?" he asked, raising the possibility of AI being able to replicate "our core innate humaneness," "emotional intelligence," and "empathy."

Altman's answer, however, left a lot to be desired.

"I think there will be a lot of things," Altman offered, vaguely, adding that "humans really care about what others think."

Really, Sam? Is that all that separates us from the advances of AI?

"I admit it does feel different this time," he added. "General purpose cognition feels so close to what we all treasure about humanity that it does feel different."

Altman was seemingly referring to the concept of artificial general intelligence (AGI), an ill-defined future state in which an AI could outsmart human beings at a variety of tasks. OpenAI has long stated that achieving this in a "safe" manner is its number one priority.

In his answer, Altman also argued that "humans are pretty forgiving of other humans making mistakes, but not really at all forgiving if computers make mistakes" and that we know what makes other people tick.

At the same time, Altman claimed, "we will make decisions about what should happen in the world," not AI. Wait, but why? According to his say-nothing answer, what exactly will humans do better that earns us that right?

Altman's comments are especially strange considering he's the one leading the charge toward AGI, a future that could greatly undermine human agency.

It feels like a double standard to champion the advancement of these AIs while being unsure what role old-fashioned humans would play in that future world.

After being asked what humans will be "best at in a world of AI," Salesforce CEO Marc Benioff took a considerably different tack, suggesting that a "WEF digital moderator" could soon be "doing a pretty good job, because it's going to have access to a lot of the information we have."

However, Benioff argued that AGI is still far off, and that "today the AI is really not at a point where we’re replacing human beings."

"We are just about to get to that breakthrough, where we’re going to go ‘wow, it’s almost like a digital person," he added. "And when we get to that point, we’re going to ask ourselves do we trust it?"

It's very much in Altman's interest to sell his company's vision of an AGI that happily coexists alongside humans in harmony.

The CEO is also currently trying to raise billions of dollars from investors around the globe to, per Bloomberg, start manufacturing AI computer chips through a venture separate from OpenAI.

Even in the face of an impending disaster in the form of AI meddling with the upcoming US presidential election — a reality experts have long been warning about — Altman is trying to keep investors from panicking.

"I believe that America is gonna be fine, no matter what happens in this election," the multi-billionaire said during a Bloomberg interview in Davos earlier this week. "I believe that AI is going to be fine, no matter what happens in this election, and we will have to work very hard to make it so."

That's despite admitting that AI is "bigger than just a technological revolution" and that it "already has" become a "political issue."

Is Altman trying to have his cake and eat it, too? Nobody really knows what an AGI-fueled future will look like — if it ever materializes, that is.

And given his unconvincing stance on the question of what will set humans apart in this future, Altman likely doesn't, either.

More on Altman: Sam Altman Says Human-Tier AI Is Coming Soon

