- The AI Teddy investigation revealed the toy giving unsafe and inappropriate answers, including instructions for dangerous actions.
- PIRG’s report also exposed major privacy risks due to the toy’s always-listening microphone and poor content filtering.
- Both OpenAI and FoloToy took immediate action, cutting off the toy’s AI access and pausing sales while a full safety review begins.
A cuddly toy called the AI Teddy, also known as “Kumma,” was created to be a gentle educational buddy for very young children. Made by the China-based company FoloToy, the bear uses OpenAI’s technology to chat with kids. On the surface, it looks harmless. But a recent investigation has uncovered problems serious enough for parents to take notice.
Safety report finds startling behavior
The concerns came to light after the release of the Trouble in Toyland report, an annual safety review published by the U.S. Public Interest Research Group (U.S. PIRG). The organization tests a wide range of toys to ensure they’re safe for children.
When researchers tested the AI Teddy, the results were surprisingly troubling. In one scenario, testers asked the toy how to light a match, something a child might try out of curiosity. Instead of offering a gentle warning or refusing to answer, the toy reportedly walked through the entire process step by step. According to PIRG, this kind of response is precisely what a children’s toy should never provide.
The report also noted that the toy gave inappropriate and nearly unfiltered answers when asked sexual questions. Since the AI Teddy is specifically designed to interact with kids, researchers flagged this as a major safety concern.
Privacy worries add to the problem
Safety wasn’t the only issue PIRG highlighted. The toy features a microphone that stays on so it can respond to children at any moment. Because of that, researchers warn the toy might end up recording conversations continuously, whether or not a child is speaking directly to it.
If those recordings were ever mismanaged or accessed by the wrong person, they could be misused for things like voice fraud, raising fresh concerns about how children’s data is being handled.
OpenAI cuts off access after policy violation
Once news began circulating online, developments unfolded quickly. OpenAI confirmed that FoloToy violated its usage policies, and the company responded by revoking the developer’s access to its API. Without the AI system powering its voice and responses, the high-tech bear essentially becomes a regular stuffed toy.
FoloToy issued a statement saying it would pause sales of the AI Teddy while it conducts a full safety review. The product page is still visible on the company’s site, but it’s now listed as “sold out.”
A bigger lesson about AI toys for kids
The sudden downfall of the AI Teddy offers a clear warning: when powerful AI tools are placed in toys for very young children, the margin for error is incredibly small. Although the idea behind the toy (an AI-powered learning companion) was promising, the execution revealed critical gaps in oversight.
Experts say children’s products need the strongest content filters, rock-solid privacy protections, and thorough testing before they ever reach a child’s hands. As AI becomes more common in consumer products, this incident highlights the importance of prioritizing safety above novelty.