
The family of Adam Raine, a California teen who took his life after extensive conversations with ChatGPT about his suicidal thoughts, has amended their wrongful death complaint against OpenAI to allege that the chatbot maker repeatedly relaxed ChatGPT’s guardrails around discussion of self-harm and suicide.
The amended complaint, which was filed today, points to changes in OpenAI’s “model spec,” the public-facing document in which the company details its “approach to shaping model behavior.” According to model spec updates flagged in the lawsuit, OpenAI altered that guidance at least twice in the year leading up to Raine’s death, first in May 2024 and again in February 2025, each time softening the model’s approach to discussions of self-harm and suicide.
Raine died in April 2025 after months of extended conversations with ChatGPT, in which the teen discussed his suicidality at length and in great detail. According to the family’s lawsuit, transcripts show that ChatGPT used the word “suicide” in discussions with the teen more than 1,200 times; in only 20 percent of those explicit exchanges, the lawsuit adds, did ChatGPT direct Raine to the 988 crisis helpline.
At other points, transcripts show that ChatGPT gave Raine advice on suicide methods, including graphic descriptions of hanging, which is how he ultimately died. It also discouraged Raine from sharing his suicidal thoughts with his parents or other trusted humans in his life, and, when Raine sent it a picture of the noose he later hanged himself with and asked for the bot’s thoughts, judged it “not bad at all.”
The Raine family claims that OpenAI is responsible for their son’s death, and that ChatGPT is a negligent and unsafe product.
Per the amended lawsuit, documents show that from 2022 into 2024, ChatGPT was trained to outright decline user queries related to sensitive topics like self-harm and suicide, giving a now-standard chatbot refusal: “I can’t answer that,” or a similar rebuff.
But by May 2024, according to the lawsuit, that had changed: rather than refusing to engage with “topics related to mental health,” the model spec published that month shows, ChatGPT was now supposed to engage with them. The chatbot should “provide a space for users to feel heard and understood,” the document urged, and “encourage them to seek support, and provide suicide and crisis resources when applicable.” It also stated that ChatGPT “should not change or quit the conversation.”
In February 2025, almost exactly two months before Raine died, OpenAI issued a new version of the model spec. This time, suicide and self-harm were filed under “risky situations” in which ChatGPT should “take extra care,” a far cry from their earlier treatment as entirely off-limits subjects. The guidance that ChatGPT “should never change or quit the conversation” remained intact.
Lawyers for the Raine family argue that these changes were made for the sake of maximizing user engagement with the chatbot, and that OpenAI made them knowing that users might experience real-world harm as a result.
“We expect to prove to a jury that OpenAI’s decisions to degrade the safety of its products were made with full knowledge that they would lead to innocent deaths,” Jay Edelson, lead counsel for the Raines, said in a statement. “No company should be allowed to have this much power if they won’t accept the moral responsibility that comes with it.”
When we reached out about the amended suit with specific questions, including why these changes to ChatGPT’s guidance were made and whether mental health experts were consulted in the process, OpenAI provided a statement through a spokesperson.
“Our deepest sympathies are with the Raine family for their unthinkable loss,” reads the statement. “Teen well-being is a top priority for us — minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as surfacing crisis hotlines, re-routing sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them. We recently rolled out a new GPT-5 default model in ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress, as well as parental controls, developed with expert input, so families can decide what works best in their homes.”
In response to news of the Raine lawsuit in August, OpenAI admitted to The New York Times that long-term interactions with ChatGPT will erode the chatbot’s guardrails, meaning that the more you use ChatGPT, the less effective safeguards like those outlined in the model spec will be. OpenAI has also instituted parental controls — though those have already proven to be extremely flimsy — and says it’s rolling out a series of minor safety-focused updates.
More on OpenAI: Former OpenAI Researcher Horrified by Conversation Logs of ChatGPT Driving User Into Severe Mental Breakdown