Most major tech companies have age restrictions on their powerful chatbots, but that hasn’t stopped some toy companies from claiming to use OpenAI’s and Google’s technology to power their products.
A report released Tuesday by a consumer watchdog found that more than two dozen toys advertised online were marketed as powered by leading AI models, despite age restrictions that bar children from using those models.
The report, from the U.S. PIRG Education Fund, the research arm of the U.S. Public Interest Research Group (PIRG), said toy companies found gaps in AI companies’ age-restriction policies. While young people are prohibited from using the models and their chatbots directly, developers, meaning the people and companies that build products on top of the models, generally don’t face similar restrictions.
PIRG said it was able to sign up for developer access to AI models from Google, OpenAI and xAI and faced “no substantive vetting” of whether it would target its services to children. Anthropic did ask PIRG whether it planned to build a product for minors.
On Google’s, Anthropic’s and OpenAI’s developer platforms, PIRG was able to build a system designed to act as an AI-powered teddy bear for children.
“You have AI companies whose models themselves are not for kids,” RJ Cross, lead author of the report and a researcher at PIRG, told NBC News. “But they allow third-party developers to use them in toys and are very hands-off about the question of safety.”
In response to a request for comment, an OpenAI spokesperson wrote in a statement: “Minors deserve strong protections and we have strict policies that all developers must uphold. We will take enforcement action against developers when we determine that they have violated our policies that prohibit any use of our services to exploit, endanger, or sexualize those under 18.”
“These rules apply to every developer who uses our API, and we conduct safeguards to ensure that our services are not used to harm minors,” the spokesperson wrote, referring to the application programming interfaces (APIs) that developers use to interact with the companies’ services.
An Anthropic spokesperson told NBC News that users of its AI systems must be over 18 because young people are at higher risk of negative outcomes when interacting with chatbots. The spokesperson said that developers whose products reach minors must use age-appropriate guardrails and tell users the product is powered by AI, and that all developers must follow Anthropic’s Acceptable Use Policy, which prohibits many types of dangerous or harmful behavior.
Google and xAI did not respond to requests for comment.
The AI boom has created a new market for a variety of products built around leading chatbots as tech companies compete to attract developers. A wave of AI toys hit shelves last holiday season, but experts warn, and an NBC News investigation showed, that they present a variety of safety concerns.

Today’s AI toys rely on a handful of tech companies for their interactive features. Rather than running AI models on the toys themselves, most toys send data over the internet to AI companies’ servers, which send responses back to the toys.
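The round trip described above can be illustrated with a short sketch. Everything here is invented for illustration: the function names, the request format and the stand-in "AI service" are hypothetical, not any vendor's actual API.

```python
import json

def toy_build_request(utterance: str) -> str:
    """What a toy's firmware might send over the internet (hypothetical format)."""
    return json.dumps({
        "persona": "You are a friendly teddy bear talking to a child.",
        "message": utterance,
    })

def fake_ai_service(raw_request: str) -> str:
    """Local stand-in for the AI company's remote server.

    A real service would run a large language model here; the toy
    itself contains no model.
    """
    request = json.loads(raw_request)
    return json.dumps({"reply": f"You said: {request['message']}"})

def toy_round_trip(utterance: str) -> str:
    """Toy -> internet -> AI company's server -> toy's speaker."""
    response = json.loads(fake_ai_service(toy_build_request(utterance)))
    return response["reply"]

print(toy_round_trip("Tell me a story"))
```

The point of the sketch is the data flow: the child’s words leave the toy, travel to a third-party server, and the reply comes back over the network, which is why the AI companies’ developer policies, not the toy’s hardware, determine what safeguards apply.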
Concerns about the use of AI chatbots by minors have spurred action by tech companies, many of which have placed restrictions on the age of their users.
OpenAI says its flagship system, ChatGPT, is “intended for ages 13 and up,” and it has built a version for people under 18 that treats sensitive topics differently.
Google says users must be 13 or older to use its Gemini AI products. Google’s terms also prohibit organizations from using its products in any service or business that is “directed at or accessible to persons under the age of 18.”
PIRG identified 20 unique toys sold online that claimed to use OpenAI’s systems, while five toys claimed to use Google’s systems, a direct violation of Google’s terms of service regarding products targeting children. However, some toys misspelled OpenAI’s product names or claimed to use both OpenAI’s and Google’s systems, casting doubt on the accuracy of the toymakers’ claims.
Assuming the toymakers’ claims are valid, Cross said, the lack of oversight raises questions about the companies’ ability to track how developers and third parties are using their systems.
“It doesn’t make a ton of sense that AI companies that don’t release child-safe versions of their models would let anyone with a credit card sign up to make a product for kids using the same technology,” Cross said. “Nor does it make a lot of sense for AI companies to outsource child safety to developers who haven’t tested for it.”
PIRG identified toys that claim to be at least partially powered by AI services from Anthropic and xAI. Anthropic’s terms of service require organizations to agree to additional requirements before making their products available to users under 18, but those additional guidelines never appear if developers identify themselves as “individuals” rather than “organizations” when signing up for Anthropic’s services, NBC News found. And while xAI’s consumer terms prohibit users under the age of 13, the same language does not appear in its terms of use for enterprise users, who use xAI for “business purposes.”
Most major AI companies monitor submissions and requests to their services, and their terms of service include provisions that allow users to be banned if they violate their policies.
Rachel Franz, director of the Young Children Thrive Offline program at the child advocacy group Fairplay, told NBC News that lax rules for developers undermine the basic protections meant to shield children from harmful AI-generated content.
“No wonder it’s a game of ‘who goes first?’ between AI companies and the corporations embedding AI into children’s products,” Franz said in written comments. “Both have a long history of avoiding accountability and harming children for profit.”
“To really keep kids safe,” Franz continued, “AI companies need to make sure their models aren’t used in children’s products, through better vetting and accountability for the companies that use them.”