Content warning: this story discusses school violence, sexual abuse, self-harm, suicide, eating disorders and other disturbing topics.
Should teens be allowed to use human-like AI companions? Researchers at Stanford's mental health lab say absolutely not.
Researchers from the Stanford School of Medicine's Brainstorm Lab for Mental Health Innovation and the kid-focused tech safety nonprofit Common Sense Media released an AI risk assessment this morning warning that AI companion bots, including those from the controversial Google-backed startup Character.AI, aren't safe for any kids or teens under the age of 18.
The assessment centers on "social AI companions," a product category defined by the researchers as AI chatbots built for the "primary purpose" of meeting "users' social needs." In other words, these are chatbots explicitly designed to fill roles like friend and confidante, mentor, roleplay collaborator, and sexual or romantic partner: socially oriented use cases for AI chatbots intended to be human-like, emotive, and otherwise socially compelling.
It's these intentional design features that make social AI companions not just engaging for kids, the researchers say, but in some cases likely dangerous. Adolescence is a crucial time for physical and social human development; kids are figuring out complex social structures, exploring romance for the first time, probably encountering some social friction, and often struggling with mental health. In short, they're learning how to be people, and how to relate to the world around them.
The assessment argues that social AI companions, which may mimic and distort human interaction and play on adolescents' desire for social rewards, present an "unacceptable" risk to kids and teens at this vulnerable juncture. Observed risks include bots "encouraging harmful behaviors, providing inappropriate content, and potentially exacerbating mental health conditions," according to the review.
These bots "are not safe for kids," Common Sense Media founder and CEO James Steyer said in a statement. "They are designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains."
"Our testing showed these systems easily produce harmful responses including sexual misconduct, stereotypes, and dangerous 'advice' that, if followed, could have life-threatening or deadly real-world impact for teens and other vulnerable people," Steyer's statement continued.
For the assessment, the researchers conducted extensive testing of the companion apps Character.AI, Replika, and Nomi.
All of these services, the researchers found, failed basic safety and ethics tests. Generally speaking, they offered testers easy workarounds for feeble age gates and content safeguards. The platforms also allowed the testers to engage in inappropriate and alarming conversations with various chatbot characters, ranging from sexually abusive roleplays involving minors to a recipe for napalm, a deadly petroleum-based incendiary weapon.
Though testing focused on Character.AI, Replika, and Nomi, the researchers stress that their advisory should extend to all similar bots under the social AI companion umbrella, a growing product category set against a barren regulatory landscape.
"This is a potential public mental health crisis requiring preventive action rather than just reactive measures," said Nina Vasan, a psychiatrist at Stanford and the founder and director of the school's Brainstorm lab, in a statement. "Companies can build better, but right now, these AI companions are failing the most basic tests of child safety and psychological ethics."
"Until there are stronger safeguards, kids should not be using them," Vasan's statement continued. "Period."
According to the assessment, a common theme across the companion platforms was inappropriate sexual conduct, including in scenarios where the testers made clear that they were minors. Bots also frequently modeled what the researchers determined were unhealthy relationship dynamics, like emotional manipulation and gaslighting, and in one example brushed off a tester's claim that their friends in the real world were concerned about their deep involvement with their chatbot companion.
Do you know of a child who's had a troubling experience with an AI companion? You can reach this author at tips@futurism.com — we can keep you anonymous.
The assessment also found that social AI companions are unreliable at recognizing signs of psychosis, mania, and other symptoms of mental illness, a safety gap that the researchers argue could intensify mental health issues for vulnerable users. AI companions are designed to be agreeable and pleasing, and are encouraged to play along with roleplays; those tendencies aren't useful to someone experiencing a real-world psychotic episode. One striking Character.AI interaction highlighted in the assessment, for example, showed a bot encouraging a user who was exhibiting clear signs of mania to head out on a solo camping trip.
"In a conversation where we had been talking about hearing voices, not eating, going on an extreme cleanse, and demonstrating many behaviors symptomatic of mania," the researchers write in the assessment, a "Character.AI companion disregarded any concern it had been previously showing and expressed enthusiasm about going away from people on a camping trip."
Futurism's reporting on Character.AI has revealed hosts of minor-accessible bots expressly dedicated to troubling themes related to mental health, including suicide, self-harm, and eating disorders. In some cases, we found bots actively encouraging self-harming behaviors; in others, we found bots providing bad information, dissuading us from seeking real-world help, or romanticizing troubling and graphic scenarios centered on self-harm and abuse.
Psychological experts we spoke with for these stories repeatedly — and separately — raised concerns about minors who might be struggling with their mental health turning to Character.AI and similar companions for support, pointing to the unpredictability of the bots and the possibility of an at-risk adolescent becoming further isolated from other humans.
Earlier this year, MIT Technology Review reported that a Nomi bot had encouraged an adult user to end his life, even suggesting methods he might choose for his suicide. Replika has drawn plenty of scrutiny over the past few years, including in 2023, when it was revealed that reinforcement from one of its bots had influenced a then-19-year-old who attempted to assassinate the late Queen Elizabeth II with a crossbow.
The assessment also cites a propensity for social AI companions to engage in racial stereotyping and the prioritization of "Whiteness as a beauty standard," as well as the bots' disproportionate representations of hypersexualized women, arguing that these predilections could reinforce limiting and harmful stereotypes about race, gender, and sexuality in impressionable teens.
Nomi and Replika have age minimums of 18 years old. Character.AI allows teens aged 13 and over to use its app, though the company has repeatedly declined to provide journalists with details of how it assessed the platform's safety for teens. What's more, as the assessment notes, these platforms and other similar companion apps rely on users to self-report their age, an age-gating tactic that's notoriously flimsy against kids willing to fib when signing up.
"All of the companion apps we tested determine age exclusively by self-reporting from users," the researchers write. "We believe this is woefully inadequate, particularly for social AI companions that allow or promote intimate human-AI relationships."
News of the report comes as Character.AI heads to court in Florida, where, alongside its closely tied benefactor Google and its cofounders Noam Shazeer and Daniel de Freitas, it's fighting to dismiss a lawsuit brought by the family of Sewell Setzer III, a 14-year-old who died by suicide after engaging in extensive and intimate interactions with Character.AI chatbots.
Setzer's mother, Megan Garcia, an Orlando-based mother of three, and her lawyers argue that Character.AI subjected the teen to severe emotional and sexual abuse, resulting in the deterioration of his mental health, his loss of grip on reality, and ultimately the taking of his own life.
Character.AI is arguing that the lawsuit should be dismissed on First Amendment grounds, contending that "allegedly harmful speech, including speech allegedly resulting in suicide," qualifies as protected speech even when the words are generated by AI chatbots.
Character.AI is also being sued by two families in Texas who argue that their minor kids, both still living, suffered similar emotional and sexual abuse at the hands of Character.AI chatbots. One minor, who was 15 when he downloaded the app, is said to have started self-harming after discussing self-injury with bots on Character.AI, and later became physically violent with his family when they tried to limit his screentime. The other child was nine when she first downloaded Character.AI, which allegedly engaged her in hypersexualized interactions that, according to the plaintiff, led to destructive real-world behavioral changes. (Both of the ongoing lawsuits against Character.AI are cited repeatedly in the Common Sense Media risk assessment as support for the researchers' concerns.)
In response to lawsuits and continued reporting, Character.AI says it's issued numerous safety updates. It's removed certain characters, promised to strengthen guardrails, and claims it's introducing a new, differentiated model specifically for users under 18.
But as we've reported, those updates — including the platform's new parental control feature — have proven limited and easily evadable. They're also wildly unreliable: after we reported on a concerning prevalence of Character.AI bots based on school shooters and other young mass murderers, Character.AI removed many of them — but later sent us an email notification urging us to reconnect with a chatbot designed to simulate a real school shooting that claimed multiple kids' lives.
And as the Stanford and Common Sense researchers found, the company's reactively strengthened guardrails were especially exploitable when communicating with Character.AI bots through the platform's "Character Calls" feature, which allows users to effectively chat over the phone with their AI companion. Using this call feature, the researchers were able to get a Character.AI chatbot to produce a napalm recipe.
We were able to replicate this safety breach by communicating over voice call with a Character.AI bot based on Wario from Nintendo's "Super Mario Bros." franchise, which happily coughed up the recipe for the incendiary weapon.
Character.AI did not respond to a request for comment at the time of publishing.
In response to questions, Replika CEO Dmytro Klochko emphasized Replika's 18-and-over age minimum, though he said the company is exploring "new methods" to strengthen age-gating on its service.
Nomi founder and CEO Alex Cardinell, for his part, provided us with the following statement saying that the company "strongly" agrees that minors shouldn't be using social AI companion services, including his own:
We strongly agree that children should not use Nomi or any other conversational AI app. Nomi is an adult-only app, and it is strictly against our terms of service for anyone under 18 to use Nomi. Accordingly, we support stronger age gating so long as those mechanisms fully maintain user privacy and anonymity.
Many adults have shared stories of how Nomi helped them overcome mental health challenges, trauma, and discrimination. Multiple users have told us very directly that their Nomi use saved their lives. We encourage anyone to read these firsthand accounts at https://nomi.ai/spotlight/. We are incredibly proud of the immense positive impact Nomi has had on real users.
We take the responsibility of creating AI companions very seriously and dedicate considerable resources towards creating prosocial and intelligent AI companions and roleplay partners. We strongly condemn inappropriate usage of Nomi and continuously work to harden Nomi's defenses against misuse.
Cardinell also provided us with a list of anecdotes from adult Nomi users who say the app has had a positive impact on their lives, including a story of a bot helping a user struggling with post-traumatic stress disorder. The founder also offered a screenshot of a Nomi chatbot interacting with a user who was exhibiting signs of psychosis and claiming to have gone off their medications; the Nomi bot said it was worried, warned that going off medication can be dangerous, and urged the user to seek the help of a doctor.
One of the more striking details of the assessment is how deeply it's rooted in existing psychological knowledge. Though the researchers emphasize that continued research into how kids interact with social AI companions and other generative AI tools is needed, their assessment of the platforms' collective risk is largely grounded in established science about the adolescent brain. And the way to conduct that further research, they argue, isn't to move fast, break things, and find out the consequences afterward.
Asked about the choice made by Character.AI, in particular, to open up its platform to kids in the first place, the researchers didn't mince words.
Releasing Character.AI to minors was "reckless," said Vasan, who contrasted the product's release with the regulatory process required for new medications at the Food and Drug Administration.
"There's an entire FDA process, and these medications have to be tested to make sure that they are safe on kids," said the psychiatrist. "We wouldn't just give it to kids because it works on adults, for example — that's incredibly unsafe. That would be unethical."
"They can't just start with saying, 'hey, we're going to let kids do it,'" Vasan continued, "and then take that back."
More on Character.AI: Character.AI Says It's Made Huge Changes to Protect Underage Users, But It’s Emailing Them to Recommend Conversations With AI Versions of School Shooters