One of the world's loudest artificial intelligence critics has issued a stark call not only to pause AI but to put an end to it entirely, by force if necessary, before it ends us instead.

In an op-ed for Time magazine, machine learning researcher Eliezer Yudkowsky, who has for more than two decades been warning about the dystopian future that will come when we achieve Artificial General Intelligence (AGI), is once again ringing the alarm bells.

Yudkowsky said that while he lauds the signatories of the Future of Life Institute's recent open letter — who include SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, and onetime presidential candidate Andrew Yang — calling for a six-month pause on AI advancement to take stock, he himself didn't sign it because it doesn't go far enough.

"I refrained from signing because I think the letter is understating the seriousness of the situation," the ML researcher wrote, "and asking for too little to solve it."

As a longtime researcher into AGI, Yudkowsky says that he's less concerned about "human-competitive" AI than "what happens after."

"Key thresholds there may not be obvious," he wrote, "we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing."

Once criticized in Bloomberg for being an AI "doomer," Yudkowsky says he's not the only person "steeped in these issues" who believes that "the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."

He has the receipts to back it up, too, citing an expert survey in which many of the respondents were deeply concerned about the "existential risks" posed by AI.

These risks aren't, Yudkowsky wrote in Time, just remote possibilities.

"It’s not that you can’t, in principle, survive creating something much smarter than you," he mused, "it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers."

There is, to Yudkowsky's mind, but one solution to the impending existential threat of a "hostile" superhuman AGI: "just shut it all down," by any means necessary.

"Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined)," he wrote. "Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries."

If anyone violates these future anti-AI sanctions, the ML researcher wrote, there should be hell to pay.

"If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated," he advised. "Be willing to destroy a rogue datacenter by airstrike."

Citing an exchange with his partner, the mother of his child, Yudkowsky said the couple worries that their daughter Nina won't survive to adulthood if people keep building smarter and smarter AIs. He urged those who share that trepidation to adopt a similarly hard line because, if they don't, that "means their own kids are going to die too."

Given the "but what about the children" posturing, it's not difficult to see why Bloomberg's Ellen Huet called Yudkowsky a "doomer" after he got into it with OpenAI's Sam Altman on Twitter.

Nevertheless, if someone who has dedicated his life to studying the dangers of a dystopian AI future says we're getting close to the very thing he's been warning about, his take may be worth a listen.

More on AI dystopia: Deranged New AI Has No Guardrails Whatsoever, Proudly Praises Hitler
