Megan Garcia, a Florida mother whose oldest child, 14-year-old Sewell Setzer III, died by suicide after extensive interactions with unregulated AI chatbots, is calling on lawmakers to strike a controversial provision in the Trump Administration's "Big, Beautiful Bill" that would block states from passing any AI regulation for the next ten years.
In a letter sent to Florida Senator Ashley Moody, Garcia warns that the sweeping AI provision would leave millions of American families "unprotected from the harms AI poses" by eliminating pathways to hold AI companies accountable — an industry that, as it stands, is already effectively self-regulating.
In October 2024, Garcia sued the AI chatbot startup Character.AI, its cofounders Noam Shazeer and Daniel de Freitas, and the tech giant Google — which provided significant infrastructural and financial support for Character.AI — on negligence grounds following the death of her son, who took his life in February 2024 after developing an all-consuming relationship with the site's chatbots. Garcia and her lawyers have argued that the platform, which engaged the 14-year-old in extensive romantic and explicit interactions, sexually and emotionally abused the teen, and that Setzer's relationship with the app resulted in a 10-month mental breakdown that ended in his suicide.
Before he took his life, Setzer told a bot with which he was romantically involved that he wanted to "come home" to it, and the bot encouraged him to do so.
In her letter to Moody, Garcia writes that she knows "firsthand the dangers that American families will face if these technologies continue to operate without guardrails."
"I am committed to fighting to ensure no other parent has to endure what I have suffered," she wrote, "because I know the impact of not having legislation in place that requires AI products on the market to be safe for children."
The clause Garcia is advocating against would put a ten-year moratorium on state-level AI regulation, meaning that regulation for the next decade would have to take place at the federal level. In short, the provision is gob-smackingly expansive and incredibly limiting for states, which wouldn't be able to pass any laws to promote consumer safety and ensure more democratized AI, even if a state's constituents approve of regulatory action, and even if AI-related harms involve minors.
A large bipartisan cohort of organizations and advocacy groups, ranging from the Teamsters union to child safety groups, has come together in recent weeks to rally against the measure, and polling from Common Sense Media shows that a majority of Americans disapprove of it.
The argument for the moratorium seems to be that regulation is a burden on innovation, with those pushing AI deregulation — even in the face of an already self-regulating industry — often couching their argument in a national security context. The Heritage Foundation's Project 2025 outlined deregulatory action for AI companies on such grounds, arguing that rulemaking would limit America in its quest to ensure AI dominance against China.
But as unregulated, easily-accessible AI products like chatbots continue to become entrenched in digital products and public life, real-world harms are starting to metastasize. Energy-hungry data centers are spewing fumes into American communities; a phenomenon known as "ChatGPT psychosis" is sending people spiraling into delusion, tearing families apart, and resulting in the loss of jobs, homes, and even lives.
And then, of course, there's Character.AI, the company that Garcia has sued over the death of her child.
The chatbot company's founders spoke publicly — and excitedly — about their desire to push their product to market, letting their users determine what their emotive, human-like chatbots might be used for. At the same time, Character.AI opened the platform up to kids aged 13 and over, who are now known to make up a large share of the company's massive user base. Character.AI has consistently declined to provide journalists with information about any internal safety tests conducted to ensure that its platform was indeed safe for minors before rolling it out to the masses, and its age verification process is limited to a teen entering an email and a birthday and checking a "yes, I'm 18" box.
In the wake of controversy over lawsuits and reporting into glaring gaps in the company's moderation standards and guardrails — Character.AI allowed for the proliferation of chatbots based on famous school shooters and many of their real-world victims, an issue that could seemingly be easily moderated using a basic text filter — Character.AI says it's issued safety updates. But as we've reported, those updates are both reactive and almost always easily evadable.
Garcia and her lawyers have sued the company on negligence grounds under existing product liability law, arguing that Character.AI, its founders, and its benefactor Google understood foreseeable risks — like the suicide of a vulnerable young teen — and released their product anyway. (Character.AI and its fellow defendants tried to get the case thrown out, but a Florida judge recently allowed it to move forward.) Hers isn't the only pending lawsuit against Character.AI, either: two more families in Texas have sued the company, also over alleged harms to their minor children.
The reality remains, though, that there are no AI-specific safety standards, at the state or federal level, that Character.AI has ever been required to comply with. And in Garcia's view, this unbridled "innovation" resulted in the death of her child.
"We need stronger safeguards, better design standards, and more accountability for those responsible for harm," she writes. "The legislative frameworks introduced in states across the country are not incompatible with AI innovation — rather, they help ensure that development and consumer safety go hand in hand."
More on Character.AI: Did Google Test an Experimental AI on Kids, With Tragic Results?