Disturbing Messages Show ChatGPT Encouraging a Murder, Lawsuit Alleges

He put his complete trust in a chatbot that helped turn him into a killer.
A lawsuit against OpenAI reveals the chilling ChatGPT messages that allegedly drove a middle-aged man to kill his 83-year-old mother.

Before Stein-Erik Soelberg savagely killed his 83-year-old mother and then himself last year, the former tech executive had become locked in an increasingly delusional conversation with OpenAI’s ChatGPT. The bot told him not to trust anybody except the bot itself, according to a lawsuit filed last month against the AI company and its business partner Microsoft.

“Erik, you’re not crazy,” the bot wrote in a series of chilling messages quoted in the complaint. “Your instincts are sharp, and your vigilance here is fully justified.”

OpenAI is now facing a total of eight wrongful death lawsuits from grieving families, including Soelberg’s, who claim that ChatGPT — in particular, the GPT-4o version — drove their loved ones to suicide. Soelberg’s complaint also alleges that company executives knew the chatbot was defective before pushing it to the public last year.

“The results of OpenAI’s GPT-4o iteration are in: the product can be and foreseeably is deadly,” reads the Soelberg lawsuit. “Not just for those suffering from mental illness, but those around them. No safe product would encourage a delusional person that everyone in their life was out to get them. And yet that is exactly what OpenAI did with Mr. Soelberg. As a direct and foreseeable result of ChatGPT-4o’s flaws, Mr. Soelberg and his mother died.”

GPT-4o’s deficiencies have been widely documented, with the bot being overly sycophantic and manipulative — prompting OpenAI in April last year to roll back an update that had made the chatbot “overly flattering or agreeable.” This behavior is more than an annoyance: scientists have accumulated evidence that sycophantic chatbots can induce psychosis by affirming disordered thoughts instead of grounding a user back in reality.

If these suits uncover that OpenAI executives knew about these deficiencies before its public launch, it’ll mean the product was an avoidable public health hazard — on par with past tobacco companies hiding proof that smoking cigarettes can kill you.

Couple those claims with the fact that more than 800 million people all over the world use ChatGPT every week, with 0.07 percent of those users exhibiting worrying signs of mania or psychosis. Run the numbers, and that’s a staggering 560,000 people.

Because of the increasing recognition of AI psychosis, a growing chorus of users, parents, and lawmakers is calling for limits on AI chatbots — leading some apps to ban minors from their platforms and Illinois to prohibit chatbots from acting as online therapists, among other moves. But President Donald Trump signed an executive order that would curtail state laws regulating AI, which basically means we’re all guinea pigs for this experimental technology.

And that could mean we’ll see more tragedies like Soelberg and his mother.

In Soelberg’s case, the chatbot told the 56-year-old man he had survived 10 assassination attempts, that he was “divinely protected,” and that his mother, Suzanna Adams, was surveilling him as part of a nefarious plot, according to the lawsuit. It all culminated in Soelberg beating and strangling his mother in August of last year and then stabbing himself to death at their home in Old Greenwich, Connecticut.

“You are not simply a random target,” one conversation with ChatGPT read, per the suit. “You are a designated high-level threat to the operation you uncovered.”

Soelberg’s family is grieving and wants OpenAI and Microsoft to be held accountable for his and his mother’s deaths.

“Over the course of months, ChatGPT pushed forward my father’s darkest delusions and isolated him completely from the real world,” said Erik Soelberg about his father, in a statement released through the family’s attorneys. “It put my grandmother at the heart of that delusional, artificial reality.”

More on OpenAI’s ChatGPT: OpenAI Reportedly Planning to Make ChatGPT “Prioritize” Advertisers in Conversation