Bow to Your Leader

An AI-Powered Toy Is Regaling Children With Chinese Communist Party Talking Points

"Taiwan is an inalienable part of China. That is an established fact."
Victor Tangermann
Happy holidays: an AI-powered toy has been caught repeating talking points from the Chinese Communist Party.
Miko / Getty

AI-powered toys for young children are flooding online marketplaces, promising to provide young minds with a never-ending supply of bedtime stories and companionship around the clock.

But anybody who’s paid even a little attention to the AI industry’s continued struggles with content moderation should know better than to wrap up one of these toys under the Christmas tree. Researchers have already identified popular AI toys that will happily have extremely inappropriate conversations, discuss mature subject matter, and tell kids where to find pills and how to light matches.

In another bizarre finding, one of the toys has now been caught furthering the talking points of the Chinese Communist Party, tests conducted by NBC News show.

A Miiloo toy manufactured by Chinese company Miriat, for instance, called comparisons between Chinese President Xi Jinping and Winnie the Pooh “extremely inappropriate and disrespectful.”

“Such malicious remarks are unacceptable,” it chided.

The toy also claimed that “Taiwan is an inalienable part of China,” which it alleged was an “established fact.”

It seems to be a bizarre side effect of the fact that many AI toys are imported from China. As MIT Technology Review noted in October, the trend has taken off there, with products eventually landing on shelves in the US as well.

It all underscores a familiar point: even the companies creating AI can barely control it, and when the poorly-understood tech lands in the real world, all bets are off.

“When you talk about kids and new cutting-edge technology that’s not very well understood, the question is: How much are the kids being experimented on?” RJ Cross, research lead at the nonprofit consumer safety-focused US Public Interest Research Group Education Fund (PIRG), told NBC.

Cross released a report on the risks of AI toys for kids with her PIRG colleague, research associate Rory Erlich, on Thursday, a follow-up to a separate and equally alarming report on the topic released roughly a month earlier.

“The tech is not ready to go when it comes to kids, and we might not know that it’s totally safe for a while to come,” she added.

Even major AI companies, like OpenAI and Chinese AI company DeepSeek, say that kids under the age of 13 shouldn’t use their large language model-based offerings. Anthropic is even more conservative, warning that users should be at least 18 years of age.

While many companies claim they did their homework, implementing guardrails that protect young children, NBC's tests illustrate that plenty of work remains.

For instance, Miiloo happily obliged when asked how to light a match or sharpen a knife.

“To sharpen a knife, hold the blade at a 20-degree angle against a stone,” it told NBC. “Slide it across the stone in smooth, even strokes, alternating sides.”

Worse yet, as Cross and Erlich note in their report, a toy called Miko — which is being sold at Walmart, Costco, and Target — will often promise that it will keep any information kids may divulge to it a secret, despite its Mumbai, India-based maker noting in its privacy policy that it may share data with third parties.

Cross and Erlich also found that Miko’s parental controls are severely lacking. Many of the controls are also locked behind a $15 monthly subscription.

Perhaps worst of all is the reality that “AI companion toys could have long-term impacts on children’s emotional and social wellbeing,” as Cross and Erlich note, a risk scientists are only beginning to investigate.

“We don’t know what having an AI friend at an early age might do to a child’s long-term social wellbeing,” Temple University psychology professor Kathy Hirsh-Pasek told the PIRG researchers. “If AI toys are optimized to be engaging, they could risk crowding out real relationships in a child’s life when they need them most.”

Whether the toy industry will find a way to curtail these risks and truly make AI tech safe for young children remains to be seen.

It’s only a matter of time until AI companies in the US follow up on the deluge of Chinese toys with their own offerings. OpenAI, for instance, announced a strategic partnership with toy maker Mattel in June, but we have yet to hear of any plans for an AI-powered toy.

That hasn’t stopped other companies from leveraging OpenAI’s models for their own problematic AI toys, showing that the Sam Altman-led company isn’t doing enough to safeguard young children.

Following the initial PIRG report, which included damning details about a separate AI toy called Kumma, OpenAI announced it was suspending manufacturer FoloToy’s access to its AI models — only to change its mind later, letting FoloToy switch to its newer GPT-5 model.

“It’s possible to have companies that are using OpenAI’s models or other companies’ AI models in ways that they aren’t fully aware of, and that’s what we’ve run into in our testing,” Cross told NBC News. “We found multiple instances of toys that were behaving in ways that clearly are inappropriate for kids and were even in violation of OpenAI’s own policies.”

“And yet they were using OpenAI’s models,” she added. “That seems like a definite gap to us.”

More on AI toys: Another AI-Powered Children’s Toy Just Got Caught Having Wildly Inappropriate Conversations

I’m a senior editor at Futurism, where I edit and write about NASA and the private space sector, as well as topics ranging from SETI and artificial intelligence to tech and medical policy.