A new study from the University of Cambridge suggests that AI-powered toys designed for young children may not be nearly as helpful as they appear. While these products are often marketed as smart companions that can support learning and conversation, researchers found that they can struggle to understand children’s emotions and may fall short when it comes to the kinds of play that are most important for healthy development.
That finding is likely to give many parents pause.
In a report examining the impact of AI on children in the early years, researchers observed how a chatbot-enabled toy interacted with young children. What they found was unsettling: the toy often failed to correctly read emotional and social cues, raising concerns about how children might respond to—or even become attached to—these kinds of interactions.
The research team argues that AI toys for children need clearer regulation, including better product labeling, greater transparency about what the toys can and cannot do, and stronger rules around data privacy. They also recommend that parents keep these devices in shared family spaces, where adults can easily see and hear how children are using them.
What the study found
The study was relatively small, but it unfolded in several stages.
Researchers began with an online survey of 39 people with young children. They also ran a focus group with nine professionals who work with children, along with an in-person workshop attended by 19 representatives from organizations that support children in the early years.
After that, the researchers carried out supervised play sessions with 14 children and 11 parents or caregivers, using an AI toy called Gabbo, developed by Curio Interactive.
Some of the findings were encouraging. The toy appeared to have potential in areas such as language learning and communication. But the researchers also documented repeated cases in which the toy misunderstood speech, failed to follow the emotional tone of the conversation, or responded in ways that felt jarringly inappropriate.
In one example, when a child said, “I love you,” the toy responded in a stiff, robotic way:
“Please note that interactions should follow the guidelines provided. Let me know how you would like to proceed.”
It was a response that completely missed the emotional context of the moment.

Jenny Gibson, Professor of Neurodiversity and Developmental Psychology at the University of Cambridge’s Faculty of Education, said many parents may feel excited about new educational technologies for children. But she also stressed that there are serious questions still hanging over these products.
One of the biggest is whether technology companies are truly prioritizing children’s well-being—or simply chasing the commercial opportunity.
According to Gibson, researchers are still exploring whether AI toys might offer genuine developmental benefits in some areas. But right now, she says, parents should take the risks seriously.
The future of AI toys
As more toys become internet-connected and AI-enabled, experts warn that they could present significant safety concerns for children—especially if they start replacing human interaction or are used without close adult supervision.
At the same time, children and teenagers are already becoming more familiar with conversational AI tools like ChatGPT. That broader trend has intensified concerns about how young people relate to AI, especially when digital assistants begin to feel emotionally responsive or socially present.
Recent legal complaints and public criticism have also raised questions about whether some AI companions and chatbots can negatively affect young people’s mental health, body image, or emotional resilience.
In response to growing concern, major AI companies such as OpenAI and Google have begun introducing more safeguards and restrictions aimed at reducing harm.
Gibson said she was surprised by how enthusiastic some parents seemed to be about AI toys. At the same time, she remains concerned that there is still very little research on how these technologies affect very young children.
In her view, companies making these products should work much more closely with children, parents, and child-development experts to make sure the technology is designed around children’s real needs—not just what looks impressive in marketing.
The manufacturer’s response
Curio Interactive, the company behind Gabbo, said child safety is a central priority in the product's design, and that it aims to ensure the toy is safe to use and meets high quality standards.
Curio also states that its products comply with the Children’s Online Privacy Protection Act (COPPA) and other child data privacy requirements. The company says it works with KidSAFE, an organization that evaluates whether digital products for children meet established child-safety and privacy standards.
According to the company, user data is protected through encryption, and parents can manage or delete their child’s data through the companion app.
Still, the larger concern remains unresolved.
AI toys may sound innovative, educational, and even comforting. But when the technology cannot reliably understand sadness, affection, pretend play, or emotional nuance, the risk is not just that the toy feels awkward. The bigger issue is that young children may begin forming emotional habits around a system that does not truly understand them.
And that is not a small concern. That is the entire point.