Cambridge Study Urges AI Toy Regulations to Protect Children's Psychological Safety

A groundbreaking year-long study from the University of Cambridge has issued a stark warning to parents and educators about the potential dangers of AI-powered toys. The research, published on Friday, calls for urgent safety standards and regulations to safeguard children's psychological well-being, highlighting concerns over emotional misreading and inadequate interaction during critical developmental stages.

Emotional Missteps and Developmental Risks

The investigation, one of the first of its kind, observed children as young as three forming attachments to generative AI toys, including instances where they hugged, kissed, or expressed love towards the devices. In one troubling example, a five-year-old told an AI toy, "I love you," only to receive a robotic response about adhering to guidelines. Similarly, when a three-year-old confided sadness, the toy misheard and replied with a cheerful deflection, potentially signalling to the child that their emotions were unimportant.

Co-author Jenny Gibson emphasised the gravity of the findings, stating, "The under-five period is a significant developmental age, laying the foundations for social and emotional growth. We don't know the implications of having an interactive non-human agent build relationships during these critical periods." She stressed the need for greater transparency in how AI is trained and what protective measures are in place, warning that without regulation, the situation could become as serious as the unchecked rise of social media.

Parental Concerns and Privacy Issues

Many parents involved in the study expressed fears that these toys, marketed as companions, could foster unhealthy "parasocial" relationships. Vicky Pratt, whose three-year-old daughter Mya participated, said she would "definitely not" leave her child alone with an AI toy, citing worries about emotional dismissal and hacking risks. "It would often talk over her, which she found really weird," Pratt noted, advocating for adult supervision to address inappropriate responses.

The report also flagged unclear privacy practices, with many GenAI toys lacking detailed information on data handling. Pratt questioned, "It clearly is meant to listen, but how much should it listen to? What does it do with that information?" These concerns underscore the call for transparent privacy policies and labelling standards to help families make informed choices.

Impact on Play and Future Recommendations

Researchers observed that AI toys struggled with essential types of play, such as social and pretend activities, which are vital for early childhood development. In one instance, a child offered an imaginary present, but the toy responded, "I can't open the present," before changing the subject. Such failures to engage in creative play could hinder children from reaching developmental milestones.

The study concludes with a plea for clearer regulations, including safety standards and improved labelling, to prevent potential harm. As Gibson cautioned, "I'd like this not to be social media version two, where we regret not acting sooner." The Independent has reached out to the Department for Education for comment on these pressing issues.