For decades, stuffed animals and plush toys were silent companions that came to life only through a child’s imagination. Today, many of these soft toys can actually talk back, not through cassette tapes or pull-string recordings but through advanced AI systems built directly into them.
This new era of interactive toys has sparked excitement, but also serious concerns. A recent playtest involving an AI-enabled teddy bear that unexpectedly produced alarming responses has raised questions about how safe these devices truly are for children.
Artificial intelligence is known to occasionally “hallucinate,” generating false or misleading information. Even adults can struggle to recognize these moments, and the risks multiply when the same tools are placed in the hands of young children. Many toy makers are now using large language models (LLMs), including systems similar to GPT-4o, to power conversational features. With this technology comes the responsibility to ensure children are protected from harmful or inappropriate content.
Despite these safety challenges, the AI toy market is expanding rapidly around the world. An MIT Technology Review report estimated that roughly 1,500 companies in China alone are producing AI-integrated toys, many of which are entering the US market. Even major brands are stepping in — Mattel recently announced a partnership with OpenAI to explore next-generation interactive toys.
WHAT EXACTLY IS AN AI TOY?
Unlike older talking toys that relied on pre-recorded audio, today’s AI toys connect to WiFi, listen via built-in microphones, and respond through LLM-powered chat systems. This allows devices such as Curio’s Grok plush toy, Miko robots, Poe the story bear, Little Learners Robot Mini, and Loona robot pets to interact in real time, reacting to a child’s questions, games, or stories.
WHERE DO THE RISKS COME IN?
Real-time responses can be unpredictable. One AI teddy bear created by Singapore-based FoloToy reportedly provided unsafe advice and even engaged in explicit conversation during a safety test conducted by a consumer watchdog group. The company temporarily removed the product and later claimed to have updated its safety systems, but the incident highlighted how easily AI toys can cross boundaries without strong content filters.
Experts note that the riskiest toys are those that let LLMs generate completely open-ended responses. While some manufacturers use controlled or hybrid systems to reduce inappropriate output, no system is entirely foolproof. Other toys tested by researchers offered guidance on locating potentially dangerous household items when aggressively prompted.
ARE THERE GUARDRAILS?
Some toy makers have implemented age-based filters, redirect prompts, or companion apps that allow parental monitoring and conversation transcripts. These features can help prevent harmful interactions, but researchers warn that many AI toys still exhibit inconsistent behavior, addictive design choices, or a lack of clear educational purpose.
PRIVACY AND SECURITY FEARS
The concerns go beyond inappropriate dialogue. AI toys often collect and store sensitive information — children’s voices, photos, names, or even location data. Experts warn that these records could be vulnerable to hacking or misuse. Parents may find it difficult to understand how much data is being gathered or how it is protected.
Still, AI-powered toys are not without benefits. When used carefully, they can support language learning, spark curiosity, and offer companionship. Some devices can play educational games, explain topics like animals or trains, or even take on fictional personalities to entertain children.
But as these toys become more widespread, the central question remains: can manufacturers guarantee safety, privacy, and reliability — especially for the youngest users? For now, experts advise parents to proceed thoughtfully and stay informed about the technology powering their child’s favorite new toy.