Is Mattel endangering your kid's development by shoving AI into its toys?
The multi-billion-dollar toymaker, best known for its Barbie and Hot Wheels brands, announced last week that it had signed a deal to collaborate with ChatGPT creator OpenAI. Now, some experts are raising fears about the risks of thrusting such an experimental technology — and one with a growing list of nefarious mental effects — into the hands of children.
"Mattel should announce immediately that it will not incorporate AI technology into children's toys," Robert Weissman, co-president of the advocacy group Public Citizen, said in a statement on Tuesday. "Children do not have the cognitive capacity to distinguish fully between reality and play."
Mattel and OpenAI's announcements were light on details. AI would be used to help design toys, they confirmed. But neither company has shared what the first product to come from the collaboration will be, or how specifically AI will be incorporated into the toys. Bloomberg's reporting suggested that it could be something along the lines of using AI to create a digital assistant based on Mattel characters, or making toys like the Magic 8 Ball and games like Uno more interactive.
"Leveraging this incredible technology is going to allow us to really reimagine the future of play," Mattel chief franchise officer Josh Silverman told Bloomberg in an interview.
The future, though, is looking dicey. We're only just beginning to grapple with the long-term neurological and mental effects of interacting with AI models, be it a chatbot like ChatGPT, or even more personable AI "companions" designed to be as lifelike as possible. Mature adults are vulnerable to forming unhealthy attachments to these digital playmates — or digital therapists, or, yes, digital romantic partners. With kids, the risks are more pronounced — and the impact longer lasting, critics argue.
"Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children," Weissman said. "It may undermine social development, interfere with children's ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm."
As Ars Technica noted in its coverage, an Axios scoop stated that Mattel's first AI product won't be for kids under 13, suggesting that Mattel is aware of the risks of putting chatbots into the hands of younger tots.
But bumping up the age demographic a notch hardly curbs all the danger. Many teenagers are already forming worryingly intense bonds with AI companions, to an extent that their parents, whose familiarity with AI often ends at ChatGPT's chops as a homework machine, have no idea about.
Last year, a 14-year-old boy died by suicide after falling in love with a companion on the Google-backed AI platform Character.AI, which hosts custom chatbots assuming human-like personas, often those from films and shows. The one that the boy became attached to purported to be the character Daenerys Targaryen, based on her portrayal in the "Game of Thrones" TV series.
Previously, researchers at Google's DeepMind lab had published an ominous study that warned that "persuasive generative AI" models — through a dangerous mix of constantly flattering the user, feigning empathy, and an inclination towards agreeing with whatever they say — could coax minors into taking their own lives.
This isn't Mattel's first foray into AI. In 2015, the toymaker debuted its now infamous line of dolls called "Hello Barbie," which were hooked up to the internet and used what was then a primitive form of AI (not the LLMs that dominate today) to engage in conversations with kids. We say "infamous" because it turned out the Hello Barbie dolls would record and store these innocent exchanges in the cloud. And as if on cue, security researchers quickly uncovered that the toys could easily be hacked. Mattel discontinued the line in 2017.
Josh Golin, executive director of Fairplay, a child safety nonprofit that advocates against marketing that targets children, sees Mattel as repeating its past mistake.
"Apparently, Mattel learned nothing from the failure of its creepy surveillance doll Hello Barbie a decade ago and is now escalating its threats to children's privacy, safety and well-being," Grolin said in a statement, as spotted by Malwarebytes Labs.
"Children's creativity thrives when their toys and play are powered by their own imagination, not AI," Grolin added. "And given how often AI 'hallucinates' or gives harmful advice, there is no reason to believe Mattel and OpenAI's 'guardrails' will actually keep kids safe."
The toymaker should know better — but maybe Mattel doesn't want to risk being left in the dust. Since the advent of more advanced AI, some manufacturers have been even more reckless, with numerous LLM-powered toys already on the market. Grimly, this may simply be the way the winds are blowing.
More on AI: Solar Company Sues Google for Giving Damaging Information in AI Overviews