Lobby This

Gavin Newsom Vetoes Bill to Protect Kids From Predatory AI

"Clearly, Governor Newsom was under tremendous pressure from the Big Tech Lobby to veto this landmark legislation."
Gavin Newsom vetoed a bill that would've required AI companies to prevent minors from engaging with dangerous AI content.

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

California Governor Gavin Newsom vetoed a state bill on Monday that would’ve prevented AI companies from allowing minors to access chatbots, unless the companies could prove that their products’ guardrails could reliably prevent kids from engaging with inappropriate or dangerous content, including adult roleplay and conversations about self-harm.

The bill would have placed a new regulatory burden on companies, which currently adhere to effectively zero AI-specific federal safety standards. As it stands, no federal AI law compels companies to publicly disclose details of safety testing, including where it concerns minors' use of their products. Despite this regulatory gap, or perhaps because of it, apps for many popular chatbots, including OpenAI's ChatGPT and Google's Gemini, are rated safe for ages 12 and up on Apple's App Store and safe for teens on Google Play.

Surveys, meanwhile, continue to show that AI chatbots are becoming a huge part of life for young people, with one recent report showing that over half of teens are regular users of AI companion platforms.

If signed into law, the bill, Assembly Bill 1064, would've been the first regulation of its kind in the nation.

As for his reasoning, Newsom argued that the bill stood to impose "such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors." In short, Newsom's position is that requiring companies to prove their guardrails can reliably keep kids away from inappropriate content, including sexual material and discussions of self-harm, goes too far, and that the possible benefits of kids using AI chatbots outweigh the possible harms.

Supporters of the bill are disappointed, with some advocates accusing Newsom of caving to Silicon Valley's aggressive, deep-pocketed lobbying efforts. According to the Associated Press, the nonprofit Tech Oversight California found that tech companies and their allies spent around $2.5 million in just the first six months of the legislative session to keep AB 1064 and related legislation from being signed into law.

“This legislation is desperately needed to protect children and teens from dangerous — and even deadly — AI companion chatbots,” said James Steyer, founder and CEO of the tech safety nonprofit Common Sense Media, in a statement. “Clearly, Governor Newsom was under tremendous pressure from the Big Tech Lobby to veto this landmark legislation.”

“It is genuinely sad that the big tech companies fought this legislation,” Steyer added, “which actually is in the best interest of their industry long-term.”

News of the veto came amid the passage of several other AI-specific regulations in California, including SB 243, a law introduced by state senator Steve Padilla that requires AI companies to issue pop-ups during periods of extended use reminding users that chatbots aren't human; mandates that AI companion platforms create "protocols" for identifying and addressing conversations about self-harm and suicidal ideation; and requires companies to institute "reasonable measures" to prevent chatbots from engaging in "sexually explicit conduct" with minors.

The mixed regulatory action in California follows a slew of high-profile child welfare and product liability lawsuits against chatbot companies. Several of the cases involve Character.AI, an AI companion platform that's extremely popular with kids; families across the country argue that the platform and its many thousands of AI chatbots sexually and emotionally abused their minor children, resulting in mental anguish, physical self-harm, and, in multiple cases, suicide. The most prominent lawsuit of the bunch centers on Sewell Setzer III, a 14-year-old in Florida who took his own life in February 2024 following extensive, romantically and sexually intimate conversations with multiple Character.AI chatbots.

OpenAI is also facing a grim lawsuit over the death by suicide of Adam Raine, a 16-year-old in California who had extensive, harrowingly explicit conversations with ChatGPT about suicidal ideation. The lawsuit alleges that ChatGPT's safety guardrails directed Raine to resources like the 988 crisis hotline only around 20 percent of the time; elsewhere, the chatbot gave him specific instructions about suicide methods, and at times discouraged him from speaking to his friends and family about his dark thoughts.

More on AI and teens: AI Chatbots Are Leaving a Trail of Dead Teens


Maggie Harrison Dupré

Senior Staff Writer

I’m a senior staff writer at Futurism, investigating how the rise of artificial intelligence is impacting the media, internet, and information ecosystems.