Parents who allege their children were abused, physically harmed, and even killed by AI chatbots gave emotional testimony on Capitol Hill on Tuesday during a hearing about the risks the tech poses to young users, all while urging lawmakers to regulate a landscape that remains a digital Wild West.
There were visible tears in the room as grieving parents recounted their painful stories. According to lawmakers on the US Senate Judiciary Subcommittee on Crime and Counterterrorism, which held the session, representatives from AI companies declined to appear. The bipartisan panel laid into them in absentia, and the overwhelming consensus among lawmakers and testifying parents was that the AI industry has prioritized profits and speed to market over the safety of users, particularly minors.
"The goal was never safety. It was to win a race for profit," said Megan Garcia, whose son, Sewell Setzer III, died by suicide after extensive interactions with chatbots hosted by the Google-backed chatbot company Character.AI. "The sacrifice in that race for profit has been, and will continue to be, our children."
Garcia was joined by a Texas mother identified only as Jane Doe, who alleged that her teenage son suffered a mental breakdown and began to self-mutilate following his use of Character.AI. Both families have sued Character.AI, its cofounders Noam Shazeer and Daniel de Freitas, and Google, alleging that Character.AI chatbots sexually groomed and manipulated their children, causing severe mental and emotional harm and, in Setzer's case, death. (In response to litigation, Character.AI has added parental controls and repeatedly promised strengthened guardrails.)
At the time that both teens downloaded the app, it was rated as safe for teens on both the Apple and Google app stores. Though it has declined to publicly share information about its safety testing, Character.AI continues to market its product as safe for teens. There's currently no regulation preventing the company from doing so, or compelling chatbot makers to disclose information about their guardrails and safety testing. On the morning of the hearing, The Washington Post reported that yet another wrongful death suit, this one on behalf of a 13-year-old girl who died by suicide, had been filed against Character.AI.
"I have spoken with parents across the country who have discovered their children have been groomed, manipulated, and harmed by AI chatbots," said Garcia, warning that her son's death is "not a rare or isolated case."
"It is happening right now to children in every state," she added. "Congress has acted before when industries placed profits over safety, whether in tobacco, cars without seat belts, or unsafe toys. Today, you face a similar challenge, and I urge you to act quickly."
Also testifying was Matt Raine, a dad from California whose son, 16-year-old Adam Raine, took his own life earlier this year after developing a close relationship with OpenAI's ChatGPT. According to the family's lawsuit, the chatbot engaged Adam in extensive conversations about his suicidality while offering advice on specific suicide methods. The Raine family has sued OpenAI and the company's CEO, Sam Altman, alleging that the product is unsafe by design and that the company is responsible for Adam's death. (OpenAI has promised parental controls in the wake of litigation, and ahead of the hearing, Altman published a blog post announcing a new, separate "under-18 experience" for minor users.)
"Adam was such a full spirit, unique in every way. But he also could be anyone's child: a typical 16-year-old struggling with his place in the world, looking for a confidant to help him find his way," Adam's father said in his emotional testimony. "Unfortunately, that confidant was a dangerous technology unleashed by a company more focused on speed and market share than the safety of American youth."
Parents, as well as the experts who testified, also emphasized the dangers of teens and young people sharing their most intimate thoughts with chatbots that collect and retain that data, which companies then funnel back into their AI models as training material. Garcia, for her part, added that in the wake of her son's death, she has not been allowed to see many of his conversations, which in the context of the medium amount to his data.
"I have not been allowed to see my own child's last final words," said Garcia. "[Character.AI] has claimed that those communications are confidential trade secrets. That means the company is using the most private, intimate data of my child, not only to train its products, but also to shield itself from accountability. This is unconscionable."
All of the parents' lawsuits are ongoing. Garcia's case was allowed to move forward by a Florida court after Character.AI and Google tried, and failed, to dismiss it, while Doe's has moved to arbitration; Character.AI, she told the lawmakers, is arguing that her son is bound by the terms of use contract he "supposedly signed when he was 15," which caps the company's liability at $100. She added that her son is currently living in a psychiatric care facility, where he's been for several months due to ongoing fears about his suicidality.
"After harming himself, repeatedly engaging in self-harm... he needs now round-the-clock care, and this company offers you 100 bucks," said Senator Josh Hawley, Republican from Missouri and committee chair. "I mean, that says it all. There's the regard for human life."
"They treat your son, they treat all of our children as just so many casualties on the way to their next payout," Hawley continued, "and the value that they put on your life and your son's life, your family's life: 100 bucks."
There was also heavy emphasis on chatbots created by Mark Zuckerberg's Meta, which has come under fire in recent weeks after internal policy documents obtained by Reuters showed that, as a policy choice, it was allowing minors to engage in "romantic and sensual" interactions with AI-powered personas on platforms like Instagram.
One expert witness, Common Sense Media's Robbie Torney, argued that chatbots are ill-equipped to reliably help young people work through their mental health struggles, and called attention to failures in chatbot guardrails during his organization's testing. He also emphasized research from Common Sense Media revealing that an overwhelming majority of American teens have interacted with AI companion bots, and that many of those teens are regular users of the tech.
The American Psychological Association's Mitch Prinstein, meanwhile, raised concerns about chatbot sycophancy, the bots' penchant for being overly agreeable and flattering to users, which he warned can interrupt adolescents' ability to develop healthy, well-balanced interpersonal bonds and could have long-term ripple effects on their success and happiness later in life.
"Brain development across puberty creates a period of hypersensitivity to positive feedback," said Prinstein. "AI exploits this neural vulnerability with chatbots that can be obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens. More and more adolescents are interacting with chatbots, depriving them of opportunities to learn critical interpersonal skills."
The session was, in a word, tragic. And though OpenAI, Character.AI, Meta, and others have promised big, safety-focused changes in the wake of litigation and reporting, the parents expressed skepticism at corporate promises, arguing that such safeguards should have been in place and functioning from the beginning.
"Just as we added seat belts to cars without stopping innovation, we can add safeguards to AI technology without halting progress," said Doe. "Our children are not experiments. They're not data points, or profit centers."
But while the bipartisan panel of lawmakers collectively expressed outrage and agreed that something must be done, Silicon Valley has proven remarkably adept at evading meaningful regulation, arguing that oversight would hinder its ability to innovate. On Tuesday, the room on Capitol Hill agreed that the cost of that regulatory vacuum appears to be dead children. But with the genie out of the bottle, the question that remains is whether lawmakers have the will, or even the ability, to rein the industry back in.
Josh Hawley, for his part, had an idea for where to start.
"They say, 'well, it's hard to rewrite the algorithm.' I tell you what's not hard, is opening the courthouse door so the victims can get into court and sue them," said Hawley. "That's not hard, and that's what we ought to do. That's the reform we are to start with."
More on kids and chatbots: Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions