Good As Dead
There's no shortage of AI doomsday scenarios to go around, so here's another AI expert who pretty bluntly forecasts that the technology will spell the death of us all, as reported by Bloomberg.
This time, it's not a so-called godfather of AI sounding the alarm bell — or that other AI godfather (is there a committee that decides these things?) — but the controversial AI theorist and provocateur Eliezer Yudkowsky, who has previously called for bombing machine learning data centers. So, pretty in character.
"I think we're not ready, I think we don't know what we're doing, and I think we're all going to die," Yudkowsky said on an episode of the Bloomberg series "AI IRL."
Completely Clueless
Some AI-apocalypse beliefs are more outlandish than others, but Yudkowsky, at the very least, has seriously maintained his for decades. And recently, his AI doom-mongering has come into fashion as the industry has advanced at a breakneck pace, making guilt-stricken Oppenheimers out of the prominent computer scientists who paved the way.
To add to the general atmosphere of gloom, these fears — though usually in less radical form — have been echoed by leaders and experts in the AI industry, many of whom supported a temporary moratorium on advancing the technology past the capabilities of GPT-4, the large language model that powers OpenAI's ChatGPT.
In fact, that model is one of Yudkowsky's chief concerns.
"The state of affairs is that we approximately have no idea what's going on in GPT-4," Yudkowsky claimed. "We have theories but no ability to actually look at the enormous matrices of fractional numbers being multiplied and added in there, and [what those] numbers mean."
Deflecting the Issue
These fears are no doubt worth considering, but as some critics have observed, they tend to distract from AI's more immediate but comparatively mundane consequences, like mass plagiarism, displacement of human workers, and an enormous environmental footprint.
"This kind of talk is dangerous because it's become such a dominant part of the discourse," Sasha Luccioni, a researcher at the AI startup Hugging Face, told Bloomberg.
"Companies who are adding fuel to the fire are using this as a way to duck out of their responsibility," she added. "If we're talking about existential risks we're not looking at accountability."
Nobody sums up this kind of behavior better than OpenAI CEO Sam Altman, a self-admitted survivalist prepper who hasn't shut up about how he's afraid and conflicted about the AI he's building, and how it could cause mass human extinction or otherwise destroy the world — none of which has stopped his formerly non-profit company from taking billions of dollars from Microsoft, of course.
While Yudkowsky is surely guilty of doomsday prophesying, too, his criticisms at least seem well-intentioned.