But don't worry! The company's CEO Sam Altman has had a consistent message for a worried public: our famously functional lawmakers and heads of state will step in and save the world from the tech he's building.
"I think if this technology goes wrong, it can go quite wrong," Altman told Congress last month. "And we want to be vocal about that. We want to work with the government to prevent that from happening."
Soon after, he embarked on a world tour, during which that conciliatory tone briefly slipped: faced with the prospect of stringent EU regulation, Altman threatened to pull out of Europe entirely.
He later backtracked, though, and now the mercurial CEO is back to his greatest hits: telling world leaders that they're our best hope of keeping his company's tech from — ahem — causing exactly the sort of grim AI apocalypse he's said to be personally quite concerned about.
Whether that concern is legitimate or an ingenious form of marketing remains somewhat hazy. After all, the idea that a new piece of tech is so powerful that it could threaten the entire world is, on a certain level, kind of a humblebrag. Even if it's true, it's catnip to investors.
What's clear, though, is that Altman's messaging remains consistent: he's very hopeful, he tells world leaders again and again, that they'll successfully regulate his dangerous new tech.
"I have been very heartened as I have been doing this trip around the world, getting to meet world leaders," Altman told Israel's ceremonial president Isaac Herzog today, "in seeing the thoughtfulness, the focus, and the urgency on figuring out how we mitigate these very huge risks."
In the meeting, Herzog — not to be confused with prime minister Benjamin Netanyahu, the actual leader of the country — perfectly illustrated the bind of other world leaders: they're alarmed by the destabilizing power of AI, but also desperately want any economic benefits it has to offer.
"Clearly, Israel is a powerhouse in terms of technology and innovation, and a leading force in the development of artificial intelligence especially," Herzog bragged. But he also warned that "there are also many risks to humanity and to the independence of human beings in the future. We must make sure that this development is used for the well-being of humanity. You can see the advantages and disadvantages, and you are the first to mention it openly and boldly."
This isn't exactly true, of course. Lots and lots of prominent people have been boldly and openly warning about the risks posed by AI for more than half a century, and alongside advancements in the tech, warnings about its dangers from those inside and outside the AI industry have reached a fever pitch.
Altman should, to be clear, be conscientious about the risks posed by the technology he's actively bringing into existence, but giving him undue credit for speaking out about those risks is kind of like lauding J. Robert Oppenheimer for his post-Manhattan Project pacifism without taking into account that he was, you know, the "destroyer of worlds."
AI has not yet reached the world-ending level of the atom bomb, the comparison Warren Buffett has drawn, but it feels perfectly plausible that it could get there. So let's not give gold stars to the dude who's done more to mainstream AI than anyone else just because he knows his technology's power.
More on Sam Altman: OpenAI CEO Signs Letter Warning AI Could Cause Human "Extinction"