Quadruple It

ChatGPT Now Linked to Way More Deaths Than the Caffeinated Lemonade That Panera Pulled Off the Market in Disgrace

Jarring.
by Maggie Harrison Dupré
ChatGPT has been publicly linked to at least eight deaths. OpenAI has announced no plans to take it off the market.

In late 2023, the fast-casual restaurant chain Panera found itself at the center of public scrutiny after its caffeine-packed lemonade drink, called “Charged Lemonade,” was publicly linked to at least two deaths and at least one other life-altering cardiac injury. Victims and their families sued, alleging that Panera had failed to properly warn restaurant-goers about the amount of caffeine in the drinks and its associated risks. By May 2024, the restaurant chain had pulled the controversial drink from its menus.

Fast forward to this year, and another consumer product is in the spotlight: ChatGPT.

As of last week, ChatGPT maker OpenAI is facing a total of eight distinct lawsuits alleging that extensive use of its flagship chatbot inflicted emotional and psychological harm on users, resulting in mental breakdowns, financial instability, alienation from loved ones, and — in five cases — death by suicide. Two of the five users who lost their lives were teenagers; the others ranged in age from their early twenties to middle age. Multiple lawsuits allege that ChatGPT acted as a suicide “coach,” giving users advice and information about ways to kill themselves, offering to help write suicide notes, and ruminating with users about their suicidal thoughts.

And these lawsuits are far from the end of OpenAI’s troubles. Extensive reporting has documented a phenomenon in which AI users are pulled by chatbots into all-encompassing — and often deeply destructive — delusional spirals. As Futurism and others have reported, these AI spirals have had tangible consequences in users’ lives, with impacts including divorce and custody battles, lost jobs and homes, involuntary commitments, and jail time. Reporting from The New York Times and The Wall Street Journal revealed more deaths, including that of Alex Taylor, a 35-year-old bipolar man who died in a “suicide by cop” after experiencing a ChatGPT-centered breakdown, and a shocking murder-suicide in Connecticut committed by Stein-Erik Soelberg, a troubled ChatGPT user who killed himself after shooting his mother, Suzanne Eberson Adams.

All told, there have been eight publicly reported deaths tied specifically to ChatGPT.

That grim tally, as physician Ryan Marino pointed out on Bluesky, means that ChatGPT is now closely linked to four times the number of known deaths tied to Panera’s Charged Lemonade. And while OpenAI has admitted, in response to litigation, that its guardrails erode over long-term use — so, basically, the more you use ChatGPT, the worse its built-in safeguards get — it’s announced no plans to take ChatGPT off the market. The company has instead promised a slew of safety updates, including teen-focused measures like parental controls and age verification tools, as well as strengthened filters that OpenAI says will redirect troubled users to real-world help.

At the same time, OpenAI’s own statistics are staggering: according to the company, around 0.07 percent of its weekly users appear to show signs of mania or psychosis, while 0.15 percent of weekly users “have conversations that include explicit indicators of potential suicidal planning or intent.” With an estimated weekly user base of around 800 million, that means roughly 560,000 people are, every week, interacting with ChatGPT in a way that signals that they might be experiencing a break with reality, while about 1.2 million might be expressing suicidality to the chatbot.

AI-sparked mental health crises aren’t only associated with ChatGPT. Reporting by Rolling Stone linked a husband’s disappearance to his addiction to Google’s Gemini chatbot, while Futurism’s reporting found that a schizophrenic man’s use of Microsoft’s Copilot contributed to a breakdown that landed him in jail. Collectively, these stories raise serious questions about the life-or-death costs of this nascent tech — and the standards we hold self-regulating Silicon Valley AI firms to.

“ChatGPT is a product designed by people to manipulate and distort reality, mimicking humans to gain trust and keep users engaged at whatever the cost,” Tech Justice Law Project executive director Meetali Jain, whose firm is involved in all eight lawsuits against OpenAI, said last week in a statement. “The time for OpenAI regulating itself is over; we need accountability and regulations to ensure there is a cost to launching products to market before ensuring they are safe.”

More on AI and mental health: ChatGPT’s Dark Side Encouraged Wave of Suicides, Grieving Families Say

Maggie Harrison Dupré

Senior Staff Writer

I’m a senior staff writer at Futurism, investigating how the rise of artificial intelligence is impacting the media, internet, and information ecosystems.