White Flag

Meta Just Quietly Admitted a Major Defeat on AI

Sorry, AI-addicted teens.
Meta quietly announced that it will cut off teen users' access to its AI characters, at least for the time being.
Sean M. Haffey/Getty Images

Meta says it’s cutting off teenagers’ access to its AI characters — at least until it can build “better” ones.

The Mark Zuckerberg-led company announced the change on Friday, signaling at least some degree of hesitation over how young users are engaging with its chatbots amid mounting concern over the tech’s effects on mental health and safety.

“Starting in the coming weeks, teens will no longer be able to access AI characters across our apps until the updated experience is ready,” Meta said in an updated blog post. “This will apply to anyone who has given us a teen birthday, as well as people who claim to be adults but who we suspect are teens based on our age prediction technology.”

The update follows an announcement from Meta in October, when it said that parents would be able to use new tools for supervising their children’s interactions with AI characters, including the ability to cut off their access to the characters entirely. The announcement also described a feature that would provide parents “insights” about the topics their teens were discussing in the AI conversations.

Meta originally promised to release these tools early this year, but that hasn’t come to pass. Now, in its new announcement, the company says it’s building a “new version” of AI characters to “give people an even better experience,” so it’s redeveloping the promised safety tools from scratch and cutting off teen access in the meantime.

Concerns over teenage use of AI chatbots have fueled the broader conversation around AI safety and the phenomenon of AI psychosis, the term some experts are using to describe delusional mental health spirals that are encouraged by an AI’s sycophantic responses. Numerous cases have ended in suicide, many of them involving teenagers. The bots remain wildly popular, with one survey finding that one in five high schoolers in the US say they or a friend have had a romantic relationship with an AI.

Meta has come under particular scrutiny, after an internal policy document revealed that the company allowed underage kids to have “sensual” conversations with its AI, and after chatbots based on celebrities including John Cena were caught having wildly inappropriate sexual conversations with users who identified themselves as young teens.

Meta isn’t the only chatbot platform to buckle under scrutiny. The website Character.AI, which offers AI companions similar to Meta’s and was popular with teens, banned minors from the platform last October, after being sued by several families who accused the company’s chatbots of encouraging their children to take their own lives.

More on AI: Meta Caught Saying It’s OK for Underage Children to Have “Romantic or Sensual” Conversations With AI


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.