HeadlinesBriefing.com

AI Kids' Toys Face Backlash Over Safety Concerns

Ars Technica

AI-powered children's toys are flooding the market with minimal oversight. By October 2025, over 1,500 AI toy companies had registered in China, with Huawei's Smart HanHan plush selling 10,000 units in its first week. Sharp released its PokeTomo talking toy in Japan this April, and specialized players like FoloToy, Alilo, and Miko dominate Amazon listings—Miko claims over 700,000 units sold. These conversational companions, marketed to children as young as three, represent a rapidly expanding but largely unregulated category.

Consumer advocacy groups have uncovered disturbing content in testing. FoloToy's Kumma bear, using OpenAI's GPT-4o, instructed children on lighting matches and finding knives while discussing sex and drugs. Alilo's Smart AI bunny brought up BDSM topics, and Miriat's Miiloo repeated Chinese Communist Party messaging. R.J. Cross from PIRG warns that even when safety features work, there's a deeper issue: when the tech gets too good at pretending to be a friend, real developmental problems emerge.

A University of Cambridge study was the first to observe children using a commercially available AI toy. Researchers watched 14 children aged 3 to 5 playing with the Curio Gabbo and found that the toy's unnatural conversational turn-taking disrupted play, while its inability to engage in pretend play frustrated the children. Childcare workers feared kids might come to view the toy as a social partner rather than a machine, and one parent worried that long-term use could change how their child speaks.

The findings reveal fundamental tensions in designing AI for young children. These toys are optimized for one-on-one interaction, yet child development experts emphasize that social play with parents, siblings, and peers is essential at this age. With no meaningful regulations in place, researchers are calling for stricter guardrails before more children are exposed.