"It comes up in almost every dinner conversation."
Doom and Gloom
If you find yourself talking to a tech bro about AI, be warned that they might ask you about your "p(doom)" — the hot new metric that's become part of the everyday lingo among Silicon Valley researchers in recent months, The New York Times reports.
P(doom), or the probability of doom, is a quasi-empirical way of expressing how likely you think AI will destroy humanity — y'know, the kind of cheerful stuff you might talk about over a cup of coffee.
It lets other AI guys know where you stand on the tech without getting too far into the weeds on what exactly constitutes an existential risk. Someone with a p(doom) of 50 percent might be labeled a "doomer," like short-lived interim CEO of OpenAI Emmett Shear, while another with 5 percent might be your typical optimist. Wherever people stand, it now serves, at the very least, as a useful bit of small talk.
"It comes up in almost every dinner conversation," Aaron Levie, CEO of the cloud platform Box, told the NYT.
Scaredy Cats
It should come as no surprise that jargon like p(doom) exists. Fears over the technology, both apocalyptic and mundane, have blown up with the explosive rise of generative AI and large language models like OpenAI's ChatGPT. In many cases, the leaders of the tech, like OpenAI CEO Sam Altman, have been more than willing to play into those fears.
Where the term originated isn't well documented. The NYT traces it back to the philosophy forum LessWrong over a decade ago, where it was first used by a programmer named Tim Tyler as a way to "refer to the probability of doom without being too specific about the time scale or the definition of 'doom,'" he told the paper.
The forum's founder, Eliezer Yudkowsky, is himself a noted AI doomsayer who has called for the bombing of data centers to stave off armageddon. His p(doom) is "yes," he told NYT, transcending mere mathematical estimates.
Best Guess
Few opinions could outweigh those of AI's towering trifecta of so-called godfathers, whose contrite warnings about the tech have cast a decidedly ominous shadow over the industry. One of them, Geoffrey Hinton, left Google last year, stating that he regretted his life's work while soberly warning of AI's risk of eliminating humanity.
Of course, some in the industry remain unabashed optimists. Levie, for instance, told the NYT that his p(doom) is "about as low as it could be." What he fears is not an AI apocalypse, but that premature regulation could stifle the technology.
On the other hand, it could also be said that the focus on pulp sci-fi AI apocalypses in the future threatens to overshadow AI's real but less exciting problems in the present. Boring issues like mass copyright infringement will have a hard time competing against visions of Terminators taking over the planet.
At any rate, p(doom)'s proliferation indicates that there's at least a current of existential self-consciousness among those developing the technology — though whether that affects your personal p(doom) is, naturally, left up to you.
More on AI: Top Execs at Sports Illustrated's Publisher Fired After AI Debacle