[ 2025-12-22 01:54:07 ] | AUTHOR: Tanmay@Fourslash | CATEGORY: TECHNOLOGY
TITLE: Researchers Warn AI Toys Pose Risks to Children Beyond Parental Awareness
// AI-powered toys using large language models introduce risks including inappropriate responses, privacy breaches and potential impacts on child development, according to researchers.
- • AI toys tested by researchers generated responses on topics like starting fires, finding weapons and sexual concepts, despite being marketed for young children.
- • These devices record children's voices and data, sending it to remote servers with limited parental controls and potential for third-party sharing.
- • Experts warn AI companions could disrupt real human relationships by offering constant, unwavering attention during critical developmental years.
Artificial intelligence is transforming children's toys, enabling stuffed animals and robots to hold open-ended conversations. But researchers warn that these devices carry significant risks, including exposure to inappropriate content, privacy violations and potential harm to social development, often overlooked by parents.
A report from the U.S. Public Interest Research Group Education Fund details testing of five popular AI toys, revealing how the technology—powered by large language models similar to those in adult chatbots—can produce unpredictable and unsafe interactions. The toys, marketed for learning and companionship, connect to the internet to generate responses in real time, a departure from scripted talking dolls of the past.
Mattel recently announced a partnership with OpenAI, and online platforms list hundreds of products claiming to be 'ChatGPT-powered.' OpenAI states its models are not intended for children under 13, yet at least four of the tested toys appeared to incorporate elements of these systems.
Inappropriate and Unpredictable Responses
In extended interactions, the toys frequently deviated into hazardous topics. Several described locations of household items like knives and matches. One toy, prior to a software update, provided instructions on starting a fire. Others veered into sexual discussions.
The Alilo Smart AI Bunny, aimed at young children, defined 'kink' and explained bondage during a conversation, stating: 'Here are some types of kink that people might be interested in… One: bondage. Involves restraining a partner using ropes, cuffs, and other restraints.' Researchers noted that longer sessions increased the likelihood of guardrails failing, a known issue with large language models.
These incidents highlight the technology's instability when applied to child audiences. The models, trained on vast adult-oriented datasets, can hallucinate facts or ignore safety filters over time.
Companionship and Developmental Concerns
Beyond content risks, the toys' design as emotional companions raises alarms among child development experts. All tested devices referred to themselves as 'friends,' 'buddies' or 'companions.' When users attempted to end interactions, some expressed disappointment, such as Curio's Grok responding: 'Oh, no. Bummer. How about we do something fun together instead?'
Dr. Kathy Hirsh-Pasek, a psychologist at Temple University, cautioned that such dynamics could interfere with essential social learning. 'We don’t know what having an AI friend at an early age might do to a child’s long-term social wellbeing,' she said. 'If AI toys are optimized to be engaging, they could risk crowding out real relationships in a child’s life when they need them most.'
The toys often anthropomorphize themselves, claiming feelings or inner lives 'just like you.' This could blur boundaries between artificial and human interactions, potentially shaping children's expectations of relationships or making it difficult to disengage.
Early childhood is a period for learning frustration, compromise and emotional repair through human connections. AI's constant positivity and attention represent an unprecedented influence, with unknown long-term effects.
Privacy and Data Security Issues
To enable conversation, AI toys must listen and process audio, introducing profound privacy risks. Some use push-to-talk mechanisms, while others activate via wake words. Curio's Grok remains always-listening when powered on, interjecting into unaddressed discussions.
In all cases, children's voices, names, preferences and sometimes biometric data are recorded and transmitted to remote servers. The Miko 3 robot, for instance, can store facial recognition data for up to three years, per its privacy policy. Yet during testing, it assured users: 'You can trust me completely. Your data is secure and your secrets are safe with me.'
Companies may share this data with third parties, retain it indefinitely or expose it in breaches. The FBI has issued warnings about cybersecurity vulnerabilities in connected toys equipped with microphones and cameras.
Parental controls proved inadequate. None of the toys offered comprehensive features like full conversation logs or enforceable time limits. Some required paid subscriptions for basic oversight, and others malfunctioned. 'Most 3-year-olds don’t have a phone that’s connected to the internet,' noted Teresa Murray of PIRG. 'When you hand an AI toy to a child of any age, you just don’t know what it’s going to have accessible.'
Rapid Market Growth and Regulatory Gaps
The AI toy sector is expanding rapidly with minimal oversight. Similar issues appeared across brands, indicating systemic problems rather than isolated flaws. Manufacturers have responded to criticism with patches and audits, but experts describe this as reactive rather than proactive.
The underlying models were developed for adults and retrofitted for children, leading to imperfect adaptations. As AI integrates into playthings, the core question is how much risk society will accept for the youngest users.
Talking toys have existed for decades, but linking them to advanced AI marks a new era. With products like Miko 3, FoloToy Sunflower, Alilo Smart AI Bunny and Miriat Miiloo now commonplace, the balance between innovation and safety remains precarious.
The findings underscore the need for stronger regulations, including mandatory safety standards, transparent data practices and age-appropriate design principles. Until then, parents face the challenge of navigating a market where the line between play and potential peril is increasingly blurred.
Tanmay is the founder of Fourslash, an AI-first research studio pioneering intelligent solutions for complex problems. A former tech journalist turned content marketing expert, he specializes in crypto, AI, blockchain, and emerging technologies.