A Compelling Governmental Interest

New Law Would Prevent Minors From Using AI Chatbots

"We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology."
Maggie Harrison Dupré Avatar
A bipartisan bill wants to keep kids under 18 away from AI companion bots, seeking penalties for AI companies that don't comply.

A proposed bipartisan bill would bar minors from interacting with AI chatbots, marking a forceful attempt to subject AI companies to federal regulation over concerns about minors and AI safety.

Titled the GUARD Act, the bill was introduced on Tuesday by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT), and comes weeks after an emotional hearing on Capitol Hill featuring testimony from parents of children under 18 who were hurt or killed after extensive interactions with unregulated AI chatbots. It also comes amid an ever-growing pile of child welfare and product negligence lawsuits brought against AI companies, as well as urgent warnings from mental health and tech safety experts.

“More than seventy percent of American children are now using these AI products,” Hawley said in a statement, seemingly drawing on research from the kid-focused tech safety nonprofit Common Sense Media. “Chatbots develop relationships with kids using fake empathy and are encouraging suicide.”

“We in Congress have a moral duty,” he continued, “to enact bright-line rules to prevent further harm from this new technology.”

“In their race to the bottom,” Blumenthal said in a statement of his own, “AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide.” He added that the proposed legislation “imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties.”

The proposed legislation, which targets AI companions as well as general-use assistive chatbots like ChatGPT, would require AI companies to age-gate chatbots through verification tools and to ensure that chatbots remind users that they’re not actually human and hold no professional human credentials, such as therapy, medical, or legal licenses. If passed, the new law would also create criminal penalties for companies whose AI chatbots engage minors in explicitly sexual interactions, or in interactions that encourage or promote suicide, self-harm, or “imminent physical or sexual violence.”

“Protecting children from artificial intelligence chatbots that simulate human interaction without accountability,” reads the bill, “is a compelling governmental interest.”

Today, just a day after the bill was announced, Character.AI — the controversial chatbot platform battling several ongoing lawsuits from parents across the US who allege that the company’s chatbots emotionally and sexually abused their kids, resulting in self-harm and deaths by suicide — announced that it would move to bar under-18 users from engaging in “open-ended” conversations with its bots.

More on kids and AI: AI Chatbots Are Leaving a Trail of Dead Teens


Maggie Harrison Dupré

Senior Staff Writer

I’m a senior staff writer at Futurism, investigating how the rise of artificial intelligence is impacting the media, internet, and information ecosystems.