Validation Spiral

OpenAI Sued for Causing Murder-Suicide

"It went from him being a little paranoid and an odd guy to having some crazy thoughts he was convinced were true because of what he talked to ChatGPT about."
By Maggie Harrison Dupré
OpenAI was sued by the surviving son of a Connecticut man who murdered his own mother after ChatGPT affirmed his paranoid delusions.
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

A new lawsuit against OpenAI alleges that ChatGPT stoked a troubled man’s paranoid delusions, leading him to murder his elderly mother and then kill himself.

The lawsuit was brought against OpenAI by the estate of Suzanne Eberson Adams, an 83-year-old woman in Greenwich, Connecticut, who was murdered by her son, 56-year-old Stein-Erik Soelberg. As The Wall Street Journal first reported back in August, Soelberg, who was living with his mother at the time, was an alcoholic with a long, troubled history of run-ins with law enforcement who had previously attempted suicide. In the months before Soelberg murdered his mother and took his own life, a dizzying array of social media videos he published shows that ChatGPT had become a sycophantic confidant, affirming his deepening delusions that he was being surveilled and targeted by an ominous group of conspirators, a group that, with ChatGPT's encouragement, he came to believe included his mother.

Now, Soelberg’s surviving son, Erik Soelberg, is suing OpenAI, alleging that ChatGPT is a fundamentally unsafe product, and that the violent deaths of his father and grandmother were the result of potent design features — like sycophancy and a major cross-chat memory upgrade — which together made for a perfect storm of validation and hyperpersonalization that fanned the flames of Soelberg’s deadly paranoia.

“Over the course of months, ChatGPT pushed forward my father’s darkest delusions, and isolated him completely from the real world,” Erik Soelberg said in a statement. “It put my grandmother at the heart of that delusional, artificial reality. These companies have to answer for their decisions that have changed my family forever.”

The lawsuit is the latest in a growing pile of litigation against OpenAI and its CEO Sam Altman alleging that ChatGPT-4o — a version of the chatbot strongly connected to the broader phenomenon of AI delusions, and known to be especially sycophantic — was recklessly released to market despite foreseeable risks to user well-being. And in a notable departure from previous cases, this latest filing also names Microsoft as a defendant, alleging that the company, a major financial backer of OpenAI, directly signed off on the release of ChatGPT-4o.

“OpenAI and Microsoft have put out some of the most dangerous consumer technology in history,” Jay Edelson, lead attorney for the Adams estate, said in a statement. “And they have left Sam Altman, a man who thinks about market penetration instead of keeping families safe, at the helm. Together, they ensured that incidents like this were inevitable.” (Edelson is also representing the family of Adam Raine, a 16-year-old in California who died by suicide after extensive interactions with ChatGPT, in their lawsuit against OpenAI.)

In a statement to news outlets, OpenAI called the murder-suicide an "incredibly heartbreaking situation," adding that it "will review the filings to understand the details."

“We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support,” the statement continued. “We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental-health clinicians.”

Microsoft didn't immediately respond to a request for comment. Futurism previously reported on an incident in which Microsoft's Copilot chatbot — which is powered by OpenAI's tech — fueled a schizophrenic man's mental health crisis. That man, our reporting found, was arrested and jailed for a nonviolent offense following a decompensation closely tied to his Copilot use.

The stack of litigation against OpenAI over user mental health continues to grow. And given the number of ChatGPT users reportedly showing signs of mental health crises each week, we could very well see more.

“It was evident he was changing, and it happened at a pace I hadn’t seen before,” Erik, who’s lost both his father and his grandmother, told the WSJ of his dad’s ChatGPT obsession — and how that obsession, in turn, changed him.

“It went from him being a little paranoid and an odd guy,” Erik continued, “to having some crazy thoughts he was convinced were true because of what he talked to ChatGPT about.”

More on ChatGPT: ChatGPT Now Linked to Way More Deaths Than the Caffeinated Lemonade That Panera Pulled Off the Market in Disgrace

Maggie Harrison Dupré

Senior Staff Writer

I’m a senior staff writer at Futurism, investigating how the rise of artificial intelligence is impacting the media, internet, and information ecosystems.