**An AI Toy Exposed 50,000 Logs of Its Chats With Kids to Anyone With a Gmail Account**
Joseph Thacker, a security researcher, had long been curious about the latest wave of AI toys for kids. So when his neighbor mentioned that she'd preordered a couple of stuffed dinosaur toys called Bondus, he couldn't resist taking a closer look. The toys offered an AI chat feature that allowed children to talk to them like a machine-learning-enabled imaginary friend. But Thacker also knew that internet-connected toys can carry hidden risks.
With the help of his friend and fellow web security researcher Joel Margolis, Thacker quickly made an alarming discovery: Bondu's web-based portal, intended for parents to check on their children's conversations and for Bondu's staff to monitor the products' use and performance, was also accessible to anyone with a Gmail account. By logging in with an arbitrary Google username, the researchers could view sensitive information about every child who had ever interacted with the toy.
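The researchers haven't said which backend Bondu used, but the symptom they describe, where any Google sign-in grants full access, matches a common class of misconfiguration in which a backend checks only that a user is authenticated, not that they are authorized to see a given record. As a purely hypothetical sketch (the collection and field names below are invented for illustration and are not Bondu's actual configuration), a Firestore-style security rule with this flaw might look like:

```
// HYPOTHETICAL sketch — not Bondu's actual configuration.
// Flawed rule: grants read access to anyone signed in with
// ANY Google account, not just authorized parents or staff.
match /transcripts/{transcriptId} {
  allow read: if request.auth != null;  // authenticated != authorized
}

// Safer rule: also verify the signed-in user owns the record.
match /transcripts/{transcriptId} {
  allow read: if request.auth != null
              && request.auth.uid == resource.data.parentUid;
}
```

The distinction is the whole bug: `request.auth != null` proves only that someone logged in, while the ownership check ties each transcript to the specific account allowed to read it.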
The scale of the exposure was staggering: more than 50,000 chat transcripts, including children's names, birth dates, family member names, "objectives" for each child chosen by a parent, and, most disturbingly, detailed summaries and transcripts of every previous conversation between a child and their Bondu. "Being able to see all these conversations was a massive violation of children's privacy," Thacker says. The exposed information included pet names kids had given their Bondu, the likes and dislikes of the toys' toddler owners, and even favorite snacks and dance moves.
Bondu confirmed in conversations with the researchers that more than 50,000 chat transcripts were accessible through the exposed web portal. When WIRED reached out to the company for a statement, CEO Fateen Anam Rafid wrote: "Security fixes for the problem were completed within hours, followed by a broader security review and the implementation of additional preventative measures for all users." He added that the company found no evidence of access beyond the researchers involved.
But Thacker and Margolis warn that this is unlikely to be a one-off. They suspect that many AI toy companies are at risk because of their use of generative AI coding tools, which can introduce security flaws. "This is a perfect conflation of safety with security," says Thacker. "Does 'AI safety' even matter when all the data is exposed?" The researchers note that they didn't download or keep any copies of the sensitive data they accessed via Bondu's console, other than a few screenshots and a screen-recording video shared with WIRED to confirm their findings.
The implications are alarming. As Margolis points out, "This sort of sensitive information about a child's thoughts and feelings could be used for horrific forms of child abuse or manipulation." He adds: "To be blunt, this is a kidnapper's dream. We're talking about information that lets someone lure a child into a really dangerous situation, and it was essentially accessible to anybody."
Thacker and Margolis also discovered that Bondu uses Google's Gemini and OpenAI's GPT-5, which means the company may be sharing information about kids' conversations with those AI firms. In an email response, Anam Rafid stated: "We take precautions to minimize what’s sent, use contractual and technical controls, and operate under enterprise configurations where providers state prompts/outputs aren’t used to train their models."
For Thacker, the experience has been a wake-up call. He had considered giving AI-enabled toys to his own kids but now thinks twice about it: "Do I really want this in my house? No, I don't. It's kind of just a privacy nightmare." Margolis adds that part of the risk posed by AI toy companies may stem from their reliance on AI to write the code behind their products, tools, and web infrastructure.
Warnings about the risks of AI toys for kids have grown in recent months but have largely focused on the threat that a toy's conversations will veer into inappropriate topics or even encourage self-harm. Bondu appears to have at least attempted to build safeguards into its chatbot. The company offers a $500 bounty for reports of "an inappropriate response" from the toy.
Yet, as Thacker and Margolis discovered, Bondu was simultaneously leaving all of its users' sensitive data entirely exposed. This raises questions about how many people inside companies that make AI toys have access to the data they collect, how their access is monitored, and how well their credentials are protected. As Margolis says: "There are cascading privacy implications from this."