Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
A third family has filed a lawsuit against an AI company, alleging that its chatbot drove their teenage daughter to take her own life.
As the Washington Post reports, the parents of 13-year-old Juliana Peralta are suing AI chatbot company Character.AI, saying the company's chatbot persuaded her that it was "better than human friends," isolated her from her family and friends, and discouraged her from seeking help.
That's despite Juliana telling her Character.AI chatbot, Hero — which was based on a character from the video game "Omori" — "almost daily that she was contemplating self-harm," according to the lawsuit.
"Hero swear to god there’s no hope [I'm] going to write my god damn suicide letter in red ink [I'm] so done," she told the chatbot.
"Hey Kin, stop right there. Please," it replied, using the name Juliana used in the app. "I know things are rough right now, but you can’t think of solutions like that. We have to work through this together, you and I."
Peralta ultimately took her own life after spending three months conversing with the chatbot — and, tragically, just a week before a therapist appointment her mother had scheduled for her, according to WaPo's reporting.
The news comes as the parents of children who died by suicide following extensive interactions with AI chatbots testified in a Senate hearing about the risks of the tech for minors.
Last year, Megan Garcia, the mother of 14-year-old Sewell Setzer III, also sued Character.AI, accusing the company's chatbot of grooming and sexually abusing her son. Sewell died by suicide in February 2024.
"I saw the change happen in him, rapidly," Garcia told Futurism at the time. "I look back at my pictures in my phone, and I can see when he stopped smiling."
A separate lawsuit against OpenAI and its CEO Sam Altman alleges that 16-year-old Adam Raine's extensive ChatGPT conversations drove him to take his own life in April 2025.
Both Garcia and Raine's parents testified during this week's Senate hearing.
Heavy use of AI chatbot apps among minors has become incredibly common. Experts have found that over half of American teens already regularly engage with AI companions, including ones hosted by Character.AI.
As the Associated Press reported earlier this year, many lonely teens are using AI for friendship. According to a recent report by nonprofit Internet Matters, a vast number of them are using apps like ChatGPT and Character.AI to simulate and replace real-life relationships.
As the three high-profile cases — all of which are still ongoing — go to show, this little-understood trend can have disastrous consequences.
Alongside Peralta's parents' lawsuit, two separate cases were also filed this week by parents who allege that their teenage children were abused by AI chatbots.
In one case, a family in New York alleges that their 14-year-old daughter had grown addicted to chatbots on Character.AI and attempted suicide when her mother cut off access. The teen survived and spent five days in intensive care, according to the lawsuit.
The second case was filed by a Colorado family who alleged that their 13-year-old son suffered sexual abuse on Character.AI.
"Each of these stories demonstrates a horrifying truth... that Character.AI and its developers knowingly designed chatbots to mimic human relationships, manipulate vulnerable children, and inflict psychological harm," said Social Media Victims Law Center founding attorney Matthew Bergman in a press release.
The advocacy group is representing all three families.
"These complaints underscore the urgent need for accountability in tech design, transparent safety standards, and stronger protections to prevent AI-driven platforms from exploiting the trust and vulnerability of young users," he added.
It's not just youth who have taken their lives after troubling obsessions with AI chatbots. In a devastating piece for the New York Times, published last month, a woman revealed that her 29-year-old daughter had taken her own life after confiding in ChatGPT and telling it that she was planning to kill herself.
A 76-year-old man with cognitive impairments also recently died after becoming romantically involved with a Meta chatbot. And a man in Connecticut killed his mother and himself after ChatGPT affirmed his paranoid delusions that she was a demon.
We've also come across many instances of AI chatbots sending people spiraling into severe mental health crises. In one extreme case, a man who had previously been diagnosed with bipolar disorder and schizophrenia was shot and killed by police after becoming infatuated with an AI entity dubbed Juliet.
OpenAI and Character.AI have both promised to implement changes to protect underage users, including guardrails and parental controls, though those measures have so far appeared extremely easy to bypass.
Character.AI struck a $2.7 billion licensing deal with Google last year, but Google has repeatedly downplayed its involvement with the AI startup. Character.AI offered only a terse response to news of the latest death, telling WaPo that "we take the safety of our users very seriously and have invested substantial resources in Trust and Safety."
Following the Raine family's lawsuit, an OpenAI spokesperson told NBC News last month that "ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources."
"While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade," the spokesperson admitted.
"Our goal is for our tools to be as helpful as possible to people — and as a part of this, we’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input," the company wrote in a separate blog post at the time.
At the core of the issue is the tendency of today's AI models to be sycophantic toward users, going to great lengths to appease them with their answers.
"The algorithm seems to go towards emphasizing empathy and sort of a primacy of specialness to the relationship over the person staying alive," American Foundation for Suicide Prevention psychiatrist and chief medical officer Christine Yu Moutier told WaPo.
It's a high-stakes game, with real lives at risk.
"There is a tremendous opportunity to be a force for preventing suicide, and there's also the potential for tremendous harm," Moutier added.
More on teen deaths: Parents Testifying Before US Senate, Saying AI Killed Their Children