No Kidding

Amazon Still Selling Multiple OpenAI-Powered Teddy Bears, Even After They Were Pulled Off the Market

Has OpenAI vetted these?
Frank Landymore
OpenAI pulled access to a GPT-4o-powered toy caught giving inappropriate responses, but it apparently still thinks these ones are okay.
Illustration by Tag Hartman-Simkins / Futurism. Source: Ebloma; Getty Images

Last week, OpenAI said it cut off the toymaker FoloToy’s access to its AI models after the AI-powered teddy bear “Kumma,” which ran GPT-4o, was found giving responses that were wildly inappropriate for children, including discussing sexual fetishes and giving instructions on how to find knives and light matches.

The move signaled that the ChatGPT maker was concerned about how its business customers, especially ones selling products for children, were using its tech, or at least how those efforts looked. But it also raised the question of what else OpenAI is doing to police this brave new world of AI chatbot-powered toys: is it proactively seeking out customers that misuse its tech, or just reacting to unflattering headlines?

Signs point to the latter. On Amazon, for instance, several AI teddy bears claiming to run ChatGPT or some form of OpenAI model remain available for sale even after the company cut ties with FoloToy.

One, called “Poe the AI Story Bear” and sold by the San Francisco startup PLAi, was prominently featured by CNET last year. Poe, which is also powered by GPT-4o, is pitched as being able to create magical, custom bedtime stories for children on the fly, which it reads aloud in an AI voice synthesized using tech from the firm ElevenLabs, according to a press release. It also claims to produce only “100% safe content” for children using “Play Safe technology.” Over fifty have been sold in the past month, the Amazon page says.

Others are decidedly more dubious. A brand called “EBLOMA” sells an AI-powered teddy bear under various names, including “WITPAW.” The product’s Amazon listing says that the stuffed animal was “built with ChatGPT” and is capable of providing “emotional support” and “continuous companionship.”

That’s striking. Many AI chatbot providers, toymakers included, have avoided giving their models long-term “memory” across conversations amid concerns about the dangerous emotional and psychological effects that an AI’s sycophantic responses can have on users. EBLOMA, meanwhile, boasts that its toy will get as familiar with your child as possible.

“WITPAW understands tone, remembers names, and grows with your child — making every interaction feel personal and real,” its website reads. (Its marketing materials are also riddled with seemingly AI-generated copy, as you can see in the image at the top of this story, as well as text proclaiming users can “Experience Scenarios.”)

Have you or your family had strange interactions with an AI-powered toy? Reach out at tips@futurism.com. We can keep you anonymous.

OpenAI did not respond to a request for comment asking whether it had determined that these products meet its own safety standards. Elaborating on its decision to block FoloToy, an OpenAI spokesperson recently told Gizmodo that the company’s “usage policies prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old. These rules apply to every developer using our API, and we monitor and enforce them to ensure our services are not used to harm minors.”

The scrutiny of AI-powered toys follows a report from the safety group PIRG detailing how several of these children’s products gave egregiously inappropriate responses, especially during longer conversations. Among the tested toys was FoloToy’s Kumma, which was caught explaining how to find and light matches. But its most alarming blunder was openly discussing sexual “kinks,” including bondage and teacher-student roleplay.

In response to the findings, a FoloToy spokesperson told PIRG that the company was temporarily suspending sales of all its products. “We are now carrying out a company-wide, end-to-end safety audit across all products,” the spokesperson said.

More on AI: Child Development Researcher Issues Warning About AI-Powered Teddy Bears Flooding Market Before Christmas


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.