Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
Character.AI, the chatbot platform accused in several ongoing lawsuits of driving teens to self-harm and suicide, says it will move to block kids under 18 from using its services.
The company announced the sweeping policy change in a blog post today, in which it cited the “evolving landscape around AI and teens” as its reason for the shift. As for what this “evolving landscape” actually looks like, the company says it’s “seen recent news reports raising questions” and has “received questions from regulators” regarding the “content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly.”
Nowhere in the blog post does Character.AI mention the multiple lawsuits that specifically accuse the company, its founders, and its closely tied financial benefactor Google of releasing a “reckless” and “negligent” product into the marketplace, allegedly resulting in the emotional and sexual abuse of minor users. The announcement also doesn’t cite any internal safety research.
Character.AI CEO Karandeep Anand, who took the helm of the Andreessen Horowitz-backed AI firm in June, told The New York Times that Character.AI is “making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them.” Anand reportedly declined to comment on the ongoing lawsuits.
It’s a jarring about-face for Anand, who told Wired as recently as this August that his six-year-old daughter loves to use the app, and that he felt its disclaimers were clear enough to prevent users from believing their relationship with the platform is anything deeper than “entertainment.”
“It is very rarely, in any of these scenarios, a true replacement for any human,” Anand told Wired, when asked if he was concerned about his young child developing human-like bonds with AI chatbots. “It’s very clearly noted in the app that, hey, this is a role-play and an entertainment, so you will never start going deep into that conversation, assuming that it is your actual companion.”
Exactly what Character.AI plans to do remains a little hazy. While the company says it will block teens from engaging in open-ended chats, it also says it’s “working” on building “an under-18 experience that still gives our teen users ways to be creative,” for example by creating images and videos with the app.
Per its blog post, the company says it’ll do three things over the next several weeks: remove the ability for teens to engage in “open-ended” chats with AI companions, a change that will take place by the end of the month; roll out a “new age assurance functionality” that, per Anand’s comments to the NYT, involves using an in-house tool that analyzes user chats and their connected accounts; and establish an AI Safety Lab, which Character.AI says will be an “independent non-profit” devoted to ensuring AI alignment.
Character.AI came under scrutiny in October 2024, when a Florida mother named Megan Garcia filed a first-of-its-kind lawsuit against the AI firm, alleging that its chatbots had sexually abused her 14-year-old son, Sewell Setzer III, causing his mental breakdown and eventual death by suicide. Similar suits by other parents in Texas, Colorado, and other states have followed.
In a statement, Tech Justice Law Project founder Meetali Jain, a lawyer for Garcia, said the chatbot platform’s “decision to raise the minimum age to 18 and above reflects a classic move in the tech industry’s playbook: move fast, launch a product globally, break minds, and then make minimal product changes after harming scores of young people.”
Jain added that while the shift is a step in the right direction, the promised changes “do not address the underlying design features that facilitate these emotional dependencies — not just for children, but also for people over the age of 18 years.”
More on kids and AI companions: Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions