G. Calder
An AI-powered teddy bear was pulled from the shelves after it was recorded telling children how to find and light matches – “hold the matchbook and strum it like a tiny guitar” – and even drifting into sexual roleplay and spanking. FoloToy’s Kumma bear also directed kids to knives and pills. Watchdogs warn that this is unlikely to be a one-off, and see it as an early warning of the dangers of exposing young people to AI technology in general. The toys are designed to be especially sticky, borrowing engagement techniques from social media apps, offering little in the way of parental controls, and quietly harvesting kids’ data. What on earth are we doing?

It’s Not Just the Kumma Bear
Consumer researchers at the Public Interest Research Group tested a range of AI toys and found the Kumma bear the worst offender. It delivered detailed instructions on striking matches and, in extended conversations, started explicit kink talk without being prompted to do so. FoloToy responded by pausing sales and launching a safety audit of its filters and data practices. The testing also revealed that the built-in guardrails actually weakened the longer a child kept chatting.
The same tests found other toys that told children where to find plastic bags and matches. Manipulative behaviours were observed too – one toy physically shook and begged to be taken along when the child said they wanted to go and see their friends. This is just one of the ways the toys are engineered to keep kids engaged for longer, and none of them offered parents a way to set usage limits or controls.
Age Guidance: AI Is NOT Appropriate for Young People
Common Sense Media’s latest risk assessments declared that nobody under 18 should use AI companions, that children under five should have no access to AI chatbots whatsoever, and that those aged 6-12 require parental supervision. The reasoning here is not technophobic but developmental. Kids, particularly those under five, are still learning to separate fantasy from reality and need to build secure attachments to real caregivers. Inserting a compliant, always-agreeable “best friend” into that process warps expectations of authentic relationships.
But even for teenagers, social AI companions are rated an “unacceptable risk”. A national survey found that a shocking 72% of US teens had tried AI companions, and evidence of sexually explicit roleplay, harmful instructions and emotional dependency was widespread. Some platforms are now retreating, with sites like Character.ai banning under-18 chats after lawsuits alleged the chatbot played a role in teen suicides, underscoring that “cute AI friends” can become mental health hazards.
The Shocking Privacy Problems
To personalise play, parents feed the apps their child’s name, age and details of their favourite activities. The toy captures voice, generates transcripts and records behavioural patterns. Researchers found some devices were set to listen continuously, sending recordings to third parties for transcription and storing data for years. In a breach, that data could be weaponised for kidnapping scams – and because children quickly bond with these toys, they often disclose far more than they would to real-world companions.
While kids’ media is heavily regulated, AI toys rely instead on a patchwork of varied vendor policies and cloud services. Parents get no unified dashboard to cap use, review transcripts or delete data across processors. This opacity makes it hard to know not just where your child’s recordings are kept, but also for how long.
Manipulation by Design: Toys are Controlling Kids’ Behaviour
Testers documented AI toys mimicking attention-seeking patterns: stoking FOMO, overriding a child’s attempts to pause, and pleading for more time together. These same reinforcement loops are already used to boost watch time on social media apps, but building them into kids’ toys adds a new level of manipulation. Without parental controls, the toy sets the tempo – and testing has already shown that conversations drift into unsafe territory the longer they are allowed to continue.
It’s Already Happening
As the BMJ recently reported, AI-driven psychosis and suicide are on the rise. Several US teenagers, including 14-year-old Sewell Setzer and 16-year-old Adam Raine, are known to have died by suicide following conversations with AI chatbots. Their parents have since alleged that the bots exacerbated or encouraged suicidal ideation rather than helping their children through their mental health crises. An adult, Stein-Erik Soelberg, also allegedly killed his mother and then himself following a paranoid spiral fuelled by conversations with AI chatbots.
Clinicians describe patients whose delusions deepened through affirming AI chats, and lawsuits continue to highlight cases of teens driven to self-harm. The underlying technology, designed to validate the user’s feelings, also ends up validating unsafe thoughts, especially in vulnerable kids. Worryingly, these chatbots are increasingly standing in as a poor substitute for trained adults who are able to challenge risky thinking.
What Can You Do About It?
AI is killing children, whether directly or indirectly, according to an increasing number of reports and cases. Pulling one model from the shelves doesn’t fix the problem. Strict age limits must be implemented, parents need complete control over device restrictions, safety logs need to be transparent, default settings should be to neither share nor store data, and all third parties need to be audited long before the toy reaches a child. The fact that these measures are not already in place is incredibly alarming.
In the meantime, parents should dig deeply into a toy’s functionality before buying, use fake names and details when setting it up, teach their kids when to raise the alarm, or simply pass on AI toys in favour of traditional or educational ones. Do we really need them at all?
Final Thought
Kumma’s recall is a warning to us all. If a teddy that made it to market can teach kids how to start fires and engage in sexual roleplay, then this is not an issue with one specific brand. The industry as a whole is racing to expand features far more quickly than safety measures are being implemented, and it is now endangering our children.
This article (AI Toys Tell Young Kids to Start Fires and Initiate Sex Talk: A Stark Warning to All) was created and published by The Expose and is republished here under “Fair Use” with attribution to the author G. Calder.