An anti-AI activist in California has been missing for about two weeks, according to The Atlantic, and now his friends are scared for his safety while San Francisco police fear he could target OpenAI employees.
The activist in question, a 27-year-old named Sam Kirchner, helped start the Stop AI group last year with a commitment to nonviolent protest, but grew frustrated and angry that the group’s efforts weren’t moving quickly or far enough as he increasingly came to see AI as a looming existential threat to humanity, according to the magazine’s reporting.
That eventually led to Kirchner splitting from the group and going off the grid after assaulting the current Stop AI leader; city police then received calls that “warned that Kirchner had specifically threatened to buy high-powered weapons and to kill people at OpenAI,” The Atlantic reports.
Kirchner’s situation is clearly extreme, but perhaps not entirely surprising as AI upends society, prompting doom-and-gloom narratives even among its most ardent boosters, such as OpenAI CEO Sam Altman.
“There is this kind of an apocalyptic mindset that people can get into,” Émile P. Torres, a philosopher, historian, and acquaintance of Kirchner’s, told The Atlantic. “The stakes are enormous and literally couldn’t be higher. That sort of rhetoric is everywhere in Silicon Valley.”
Kirchner’s descent seems to have started last month with a confrontation with members of Stop AI, one of several AI-skeptic organizations and loose online confederations that have sprung up in recent years. Stop AI’s mission is to push for a “permanent global ban on the development of artificial superintelligence,” The Atlantic reports.
Kirchner disagreed with Stop AI’s messaging for a protest and then beat up Matthew “Yakko” Hall, the group’s current leader, after trying to access funds from Stop AI’s coffers. Members later found Kirchner’s West Oakland apartment empty and learned that city police had issued an internal alert describing him as armed and dangerous and a possible threat to OpenAI employees; the company’s offices locked down last month as a result.
Stop AI members don’t think he’s a danger to the public and are more worried about his mental and physical health. Most likely, he’s hunkering down somewhere, feeling hurt and embarrassed, they told the magazine. Nonetheless, some of his last words to Hall were that the “nonviolence ship has sailed for me.”
“He had the weight of the world on his shoulders,” Stop AI organizer Wynd Kaufmyn told The Atlantic.
A more extreme example of an AI-skeptic group is the Zizians, a cult that grew fearful AI could end humanity and has been implicated in several murders, though those cases have nothing to do with AI. Others are far more moderate: Pause AI, for instance, “advocates for a pause in superintelligent-AI development until it can proceed safely, or in ‘alignment’ with democratically decided ideal outcomes.”
It’s tough to say how much AI really is poised to change society. It’s possible that its capabilities could soon hit a wall, or that it will continue to grow more powerful until it threatens the economic and social order, or even that superintelligent AI could pose a threat to humankind. That final possibility isn’t an entirely fringe position; the public intellectual Eliezer Yudkowsky, for instance, recently published a book titled “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All,” which became an immediate bestseller in September.
It isn’t just AI-safety activists and extremists: even the leaders of the tech companies pushing the latest AI, including Anthropic CEO Dario Amodei and OpenAI’s Altman, can sound like doomers. Sure, it could all be marketing hype, but people take these predictions seriously, which has inadvertently made these companies and their leaders targets in the eyes of disgruntled masses who feel left out, their lives dictated by rich people who aren’t accountable for their actions.
“I have been worried about people in the AI-safety crowd resorting to violence,” Torres said, reflecting on Kirchner’s story. “Someone can have that mindset and commit themselves to nonviolence, but the mindset does incline people toward thinking, Well, maybe any measure might be justifiable.”