Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

Parents of children who died by suicide following extensive interactions with AI chatbots are testifying this week in a Senate hearing about the possible risks of AI chatbot use, particularly for minors.

The hearing, titled "Examining the Harm of AI Chatbots," will be held this Tuesday by the US Senate Judiciary Subcommittee on Crime and Counterterrorism, a bipartisan panel chaired by Missouri Republican Josh Hawley. It'll be live-streamed on the Judiciary Committee's website.

The parents slated to testify include Megan Garcia, a Florida mother who in 2024 sued the Google-tied startup Character.AI, its cofounders Noam Shazeer and Daniel de Freitas, and Google itself over the suicide of her 14-year-old son, Sewell Setzer III. Setzer took his own life after developing an intensely intimate, romantic, and sexualized relationship with a Character.AI chatbot; Garcia alleges that the platform emotionally and sexually abused her teenage son, triggering a mental breakdown and an eventual break from reality that led to his death.

Also scheduled to speak to senators are Matt and Maria Raine, California parents who in August filed a lawsuit against ChatGPT maker OpenAI following the suicide of their 16-year-old son, Adam Raine. According to the family's lawsuit, Adam engaged in extensive, explicit conversations about his suicidality with ChatGPT, which offered unfiltered advice on specific suicide methods and encouraged the teen, who had expressed a desire to share his dark feelings with his parents, to keep hiding those feelings from loved ones.

Both lawsuits are ongoing, and the companies have pushed back against the allegations. Google and Character.AI attempted to have Garcia's case dismissed, but the presiding judge shot down their dismissal motion.

In response to litigation, both companies have moved — or at least made big promises — to strengthen protections for minor users and users in crisis, efforts that have included installing new guardrails directing at-risk users to real-world mental health resources and implementing parental controls.

Character.AI, however, has repeatedly declined to provide us with information about its safety testing following our extensive reporting on easy-to-find gaps in the platform's content moderation.

Whatever the promised safety improvements amount to, the legal battles have raised significant questions about minors and AI safety at a time when AI chatbots are increasingly ubiquitous in young people's lives, despite a glaring lack of regulation designed to moderate chatbot platforms or ensure enforceable, industry-wide safety standards.

In July, an alarming report from the nonprofit advocacy group Common Sense Media found that over half of American teens regularly engage with AI companions, including chatbot personas hosted by Character.AI. The report, which surveyed American teens aged 13 to 17, was nuanced: while some teens seemed to be forming healthy boundaries around the tech, others reported feeling that their human relationships were less satisfying than their connections to their digital companions. The main takeaway, though, was that AI companions are already deeply intertwined with youth culture, and kids are definitely using them.

"The most striking finding for me was just how mainstream AI companions have already become among many teens," Dr. Michael Robb, Common Sense's head of research, told Futurism at the time of the report's release. "And over half of them say that they use it multiple times a month, which is what I would qualify as regular usage. So just that alone was kind of eye-popping to me."

General-use chatbots like ChatGPT, meanwhile, are also growing in popularity among teens, and chatbots continue to be embedded into popular youth social media platforms like Snapchat and Meta's Instagram. Speaking of Meta, the big tech behemoth recently came under fire after Reuters obtained an official Meta policy document that said it was appropriate for children to engage in "conversations that are romantic or sensual" with its easily accessible chatbots. The document even outlined multiple company-accepted example interactions for its chatbots, which, yes, included sensual conversations about children's bodies and romantic dialogues between minor-aged human users and characters based on adults.

The hearing also comes days after the Federal Trade Commission (FTC) announced a probe into seven major tech companies, including Character.AI, Google owner Alphabet, OpenAI, xAI, Snap, Instagram, and Meta, over concerns about AI chatbots and minor safety.

"The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions," reads the FTC's announcement of the inquiry, "to limit the products' use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products."

More on AI and child safety: Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions

