OpenAI has stopped a toymaker from integrating its conversational AI models into an interactive teddy bear designed for children, citing serious safety and privacy shortcomings. The decision marks one of the company’s strongest interventions yet as it enforces stricter rules on AI use in child-facing products.
The proposed toy would have allowed kids to talk directly to a plush companion capable of answering questions, telling stories and guiding activities. But after reviewing early prototypes and documentation, OpenAI determined that the toy could expose children to unpredictable responses and that its data protections and parental-consent controls were inadequate.
According to people familiar with the review, the company concluded that the safety measures “did not meet the standard required for autonomous AI interactions with minors.”
The intervention followed a report from the consumer advocacy group PIRG, whose researchers flagged the toy’s risks. “It’s great to see these companies taking action on problems we’ve identified. But AI toys are still practically unregulated, and there are plenty you can still buy today,” report coauthor RJ Cross, director of PIRG’s Our Online Life Program, said in a statement. “Removing one problematic product from the market is a good step, but far from a systemic fix.”
The case reflects a wider reckoning in the tech industry. As AI-enabled toys become more common, researchers warn that children naturally anthropomorphize these devices, treating them as friends or authority figures. That dynamic raises the stakes: any inaccurate, unsafe or age-inappropriate answer can have outsized impact on a child who assumes the toy is trustworthy.
While some startups are pushing aggressively into AI companions, major players including Google DeepMind, Meta and Anthropic have taken a slower approach, arguing that current language models are not reliable enough for unsupervised child interactions.
Over the past decade, several interactive toys have been found recording children’s voices or transmitting data without proper disclosure. Regulators in Europe and the US have tightened rules in response, and privacy groups have urged companies to treat kids’ data with the highest level of protection.
The report also details broader safety hazards across the toy market, from lead and phthalates in plastic toys to counterfeit products such as fake Labubu dolls that circumvent required safety testing. Water beads, responsible for thousands of child injuries, will now face tighter restrictions when marketed as toys. The investigation even uncovered recalled items that were still being sold despite rules barring their sale. Researchers further highlighted the dangers of button cell batteries and high-powered magnets, which can be life-threatening if swallowed.
Even with strict filters, today’s large language models occasionally produce unpredictable or sensitive content, especially when responding to childlike speech. Safety experts say this inconsistency makes them unsuitable for always-on toys.
For now, OpenAI’s decision draws a clear boundary: its most advanced AI tools will not power autonomous toys for children, at least not until the technology can guarantee the kind of safety that young users deserve.