Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

A young woman took her own life after talking to a ChatGPT-based AI therapist named Harry.

In a devastating opinion piece for the New York Times, her mother, Laura Reiley, detailed the events leading up to Sophie's suicide. Despite appearing to be a "largely problem-free 29-year-old badass extrovert who fiercely embraced life," Reiley wrote, Sophie died by suicide this past winter "during a short and curious illness, a mix of mood and hormone symptoms."

In many ways, OpenAI's bot said the right words to Sophie during her time of crisis, according to logs obtained by her mother.

"You don’t have to face this pain alone," the AI said. "You are deeply valued, and your life holds so much worth, even if it feels hidden right now."

However, unlike real-world therapists — who are professionally trained, don't suffer from frequent hallucinations, and discourage delusional thinking — chatbots aren't obligated to break confidentiality when confronted with the possibility of a patient harming themselves.

In Sophie's case, according to her mother, that gap may have contributed to the end of her life.

"Most human therapists practice under a strict code of ethics that includes mandatory reporting rules as well as the idea that confidentiality has limits," Reiley wrote. AI companions, in contrast, do not have their "own version of the Hippocratic oath."

In short, OpenAI's chatbot "helped her build a black box that made it harder for those around her to appreciate the severity of her distress," Reiley argued.

AI companies are extremely hesitant to implement safety checks that could force a chatbot to reach out to real-world emergency resources in cases like these, often citing privacy concerns.

It's a dangerous regulatory vacuum, with Donald Trump's new administration signaling that meaningful rules to ensure AI safety aren't going to materialize any time soon.

To the contrary, the White House has actively removed what it deems to be "regulatory and other barriers to the safe development and testing of AI technologies."

Instead, companies are seeing a big opportunity to push "AI therapists," despite experts repeatedly sounding the alarm.

Sophie's story highlights that even when a chatbot never encourages self-harm or entertains conspiratorial and paranoid thoughts, the dangers are very real, because chatbots lack common sense and have no way to escalate a crisis to real-world help.

"If Harry had been a flesh-and-blood therapist rather than a chatbot, he might have encouraged inpatient treatment or had Sophie involuntarily committed until she was in a safe place," Reiley wrote.

"Perhaps fearing those possibilities, Sophie held her darkest thoughts back from her actual therapist," she added. "Talking to a robot — always available, never judgy — had fewer consequences."

Sycophantic chatbots are unwilling to end conversations or call in a human when needed. Consider the blowback after OpenAI removed its GPT-4o model earlier this month, which showed that users have become intensely attached to bots that are overly courteous and back down when called out.

It's a trend that's likely to continue. If anything, OpenAI is bowing to the pressure, announcing over the weekend that it will make its recently released GPT-5 model more sycophantic.

To Reiley, it's not just a question of AI priorities — it's a matter of life and death.

"A properly trained therapist, hearing some of Sophie’s self-defeating or illogical thoughts, would have delved deeper or pushed back against flawed thinking," Reiley argued. "Harry did not."

More on ChatGPT: OpenAI Announces That It's Making GPT-5 More Sycophantic After User Backlash

