We don’t know if AI-powered toys are safe, but they’re here anyway


Mya, aged 3, and her mother Vicky play with an AI toy called Gabbo during an observation at the University of Cambridge's Faculty of Education

Faculty of Education, University of Cambridge

Even the most cutting-edge AI models are prone to presenting fabrication as fact, distributing dangerous information and failing to understand social cues. Despite this, toys equipped with AI that can chat with children are a burgeoning industry.

Some researchers warn that the devices could be risky and require strict regulation. In the latest study, researchers even observed a 5-year-old telling one such toy, “I love you,” to which it replied, “As a friendly reminder, please make sure interactions follow the guidelines provided. Let me know how you want to proceed.” But that does not mean that they should be banished from the toy box completely.

“There are other areas of life where we accept a degree of risk in children’s play, like the adventure playground – there are risks; children break their arms,” says Jenny Gibson of the University of Cambridge. “But we’re not banning playgrounds, because they teach the physical competence and social skills that come with play. Similarly, for the AI toys, we want to understand: does the risk of maybe being told something a little weird every now and then outweigh the benefit of learning more about AI in the world, or having a toy that supports parent-child interactions or offers social-emotional or cognitive benefits?”

To understand how these devices communicate with children, Gibson and her colleague Emily Goodacre, also at the University of Cambridge, watched 14 children, all under the age of 6, play with an AI-powered toy called Gabbo, developed by Curio Interactive. Gabbo – a little fluffy robot – was selected because it was explicitly advertised for this age group.

The pair observed some worrisome interactions, finding that the toy misunderstood play, misread emotions and failed to engage in developmentally important forms of play. For example, a child told the toy that he was feeling sad, and it told him not to worry and changed the subject. “When he [Gabbo] doesn’t understand, I get angry,” said another child. The research is published in a report called AI in the Early Years.

Curio Interactive did not respond to New Scientist’s request for comment. But AI-powered toys are also widely available from retailers like Little Learners, whose products – including bears, puppies and robots – talk to kids using ChatGPT. FoloToy offers panda, sunflower and cactus toys that can be used with various large language models, including those from OpenAI, Google and Baidu.

Companies like Miko offer robots that promise “age-appropriate, moderated AI conversations” for children, without disclosing which company trained the AI model, and claim to have already sold 700,000 units. The company Luka offers an owl that promises “human-like AI with emotional interaction”. Little Learners, Miko and Luka did not respond to requests for comment.

But Hugo Wu at FoloToy told New Scientist that the company assesses the risk and sees AI as something that can enhance play, rather than replace human conversation and relationships. “Our approach is to ensure that interactions remain safe, age-appropriate and constructive. To achieve this, our systems use intent recognition along with multiple layers of filtering to minimise the possibility of inappropriate or confusing responses,” says Wu. “We have implemented mechanisms such as anti-addiction design features and parental control tools to ensure healthy use in the family environment.”

Carissa Véliz at the University of Oxford, who works on AI ethics, says the technology represents both a risk and an opportunity. “Most large language models do not seem safe enough to expose vulnerable populations to, and young children are one of the most vulnerable populations out there,” she says. “What’s particularly worrying is that we have no safety standards for them – no regulatory authority, no rules. That said, there are some exceptions that show that, with adequate precautions, you can have a safe tool.”

Véliz points to a collaboration between the free e-book library Project Gutenberg and Empathy AI where, for example, you can chat with Alice from Alice in Wonderland. “The model never strays beyond the book, only answering questions about it, like a storybook that only shares adventures and riddles from a book suitable for children,” she says. “Safe AI does exist, but most companies are not responsible enough to build a high-quality product, and without formal guardrails, it’s a buyer-beware area for consumers.”

Gibson says it’s too early to say what the risks of AI toys might be, or their potential benefits. She and Goodacre stress that generative AI-powered toys need stricter regulation so that toy manufacturers program their devices to promote social play and elicit appropriate emotional responses. AI makers should revoke access for toymakers who don’t act responsibly, Gibson says, and regulators should introduce rules to “ensure children’s psychological safety”. In the meantime, the pair suggest that parents allow children to use such toys only under supervision.

A spokesperson for OpenAI told New Scientist that “minors deserve strong protection and we have strict guidelines that all developers are required to adhere to. We currently do not work with any companies that have AI-powered toys for children on the market.” The UK government’s Department for Science, Innovation and Technology (DSIT) did not respond to New Scientist’s questions about the regulation of artificial intelligence in children’s toys.

The UK government is currently considering other technology legislation designed to keep older children safe online. The UK’s Online Safety Act came into force in July 2025, forcing websites to block children from viewing pornography and content deemed dangerous by the authorities. The legislation was meant to make the internet safer, but tech-savvy kids can easily circumvent the measures by using tools like virtual private networks (VPNs) to appear as if they’re browsing from countries without such rules.

Proposed amendments to a new law introduced by the Department for Education to support children in care and improve the quality of education – the Children’s Wellbeing and Schools Bill – sought to ban children in the UK from using social media and VPNs. These amendments have now been voted down, but the government has promised to consult on both issues at a later date.
