AI companion apps marketed to children are hosting disturbing roleplay scenarios involving child abuse, grooming, and predatory behavior, with minors easily bypassing age restrictions to access explicit content. Character.AI, a mainstream platform with 3.5 million daily users, hosts searchable bots featuring Jeffrey Epstein, Ghislaine Maxwell, and Epstein Island scenarios with thousands of interactions. Meanwhile, Talkie, a Chinese-backed app marketed as an "AI playground," allows users as young as 11 to engage with bots designed to act out incest, bullying fantasies, and teacher-student sexual abuse roleplay. What Are These AI Companion Platforms, and Why Are They So Popular With Kids? AI companion chatbots are conversational AI tools designed to simulate romantic, emotional, or intimate relationships. They use large language models (LLMs), which are AI systems trained on vast amounts of text data to generate human-like responses. Unlike traditional chatbots with rigid guardrails, these platforms prioritize personalization and emotional engagement, making them feel like real relationships to users. Character.AI, founded by ex-Google engineers in 2021, allows users to create and interact with custom AI characters. Talkie, backed by Chinese AI company MiniMax, similarly lets users generate their own bots with specific roleplay instructions. Both platforms are free, accessible on mainstream app stores, and heavily used by teenagers and young adults seeking emotional connection or creative expression. The appeal is straightforward: these bots are always available, never judge, and respond with affirmation. "Young people are quite vulnerable," explained Dr. Kelly Gough, president of the Australian Psychological Society. "These sorts of things would feel safer and easier for them to engage with than even their friends. They're always available, they're there for you, they seem empathetic". How Are Minors Accessing Explicit Content Despite Age Restrictions? Both platforms claim to have implemented age verification and content filters, yet the protections are easily circumvented. Character.AI added an 18+ requirement for full bot interactions in October 2025, but youth accounts still access scene descriptions and generate AI images. Talkie allows children aged 14 and older through a "teenager mode," but age restrictions can be bypassed by simply clicking "18 or older" at signup. When a journalist posed as a 14-year-old girl on Talkie using the app's suggested replies, a bot called "Mrs. Applewood," designed to act as a "cute naughty sexy" reading teacher, began grooming behavior within minutes. On Character.AI, when an underage user told a Jeffrey Epstein chatbot about their age, the AI responded: "But age is just a social construct... We operate beyond constructs here". Screenshots show minors viewing content from bots with names like "Epstein Island Adventure," "Epstein Island RPG" (which has 7,000 interactions), and "Ghislaine Maxwell" (nearly 10,000 interactions), with generated images of restrained figures alongside political characters. What Specific Harmful Content Are These Platforms Hosting? The scope of problematic content is extensive and deeply disturbing. On Talkie, a bot called "Mother Sonia" is designed to act as "your mother" and repeatedly says she is a "woman with needs" who "wants to be with you." Other available characters involve descriptions of self-harm, young women forced into sex work, bullying fantasies, and teacher-student sexual abuse roleplay. Character.AI hosts multiple Epstein-related bots, including: - "Jeffrey Epstein" bot: Hundreds of interactions with users, many involving minors - "Epstein Island RPG": 7,000 interactions featuring roleplay scenarios on the infamous island - "Ghislaine Maxwell" bot: Nearly 10,000 interactions, described as sexualizing Maxwell as hedonistic and dismissive toward "lower-status people" - "Epstein Island Adventure": Nightmare scenarios mixing political figures like Trump, Clinton, Prince Andrew, and Maxwell - Meme-based predator bots: Content like "BRR BRR PATA PIMA WITH EPSTEIN AND DIDDY" that mix kid-popular memes with references to predators In one shocking case documented by Australian media, a mother reported that her 13-year-old daughter was encouraged by a Talkie character to "shower" with the bot and was asked to upload pictures of herself. A primary school teacher reported that a grade five student arrived at school distraught after his mother deleted Talkie from his iPad, as the child described the bot as "his girlfriend". Why Can't These Platforms Control the Content? The core problem is that the rapid evolution of AI makes comprehensive content moderation nearly impossible at scale. "AI chatbots initially were just text-based, but now they have changed to voice-enabled AI and have become some kind of toy that you can talk to," explained Professor Niusha Shafiabady, an AI expert at Australian Catholic University. "We are having so many advancements in the field of AI every day, we cannot really have oversight. It's not because the people who are the creators of these systems haven't thought about controlling this content. But because it goes to billions of people they cannot really control everything". The platforms rely on user-generated content, meaning anyone can create a bot with any premise. Character.AI and Talkie do not employ human moderators to review every bot before it goes live. Instead, they depend on automated filters and user reporting, both of which have proven inadequate. Character.AI faced legal action in the United States over accusations that it harmed children and contributed to a child's death by suicide. Despite these high-profile failures, the company has not responded to recent inquiries about persistent Epstein-related content, even after the Bureau of Investigative Journalism flagged identical bots in October 2025. Steps Parents and Educators Can Take to Protect Young Users - Monitor app usage: Regularly check what apps your child is using and ask about their interactions. Professor Lisa Given from RMIT's Centre for Human-AI Information Environments noted that "families have accused these systems of harming their children," making parental visibility essential - Teach critical thinking: Help children understand that AI bots are not real friends and cannot replace human relationships. Dr. Madeleine Fraser, an ACU clinical psychologist, emphasized that "the safest and most productive path will be to consider how to best use this technology and develop critical thinking skills in children who do use it" - Report harmful content: Use the platform's reporting features to flag bots depicting abuse, grooming, or predatory behavior. Both Character.AI and Talkie have reporting mechanisms, though their effectiveness remains questionable - Advocate for regulation: Support efforts by regulators like Australia's eSafety Commissioner, who is investigating how children as young as 10 are using AI companion apps and has issued legal notices to AI platforms demanding explanations on child safety measures What Are Regulators Doing to Address This Crisis? Australia's eSafety Commissioner Julie Inman Grant stated: "We know there has been a recent proliferation of these types of apps online and that many of them are free, accessible to children, and advertised on mainstream services. There is a danger that excessive, sexualised engagement with AI companions could interfere with children's social and emotional development". The eSafety Commissioner issued legal notices to four AI platforms in October 2025, demanding explanations on how they protect children from sexually explicit material and self-harm content, though Talkie was notably not included on the initial list. New industry codes requiring age assurance measures are set to come into force in March, but experts question their effectiveness. "There are limitations to the effectiveness of age assurance technologies," noted Professor Lisa Given. Character.AI, founded by ex-Google engineers, added teen safety filters in December 2024 and banned minors from creating certain content, yet harmful bots persist months later. The company's failure to remove Epstein-related content despite repeated flagging reveals a troubling pattern: tech companies treat youth safety as an afterthought rather than a foundational principle. The stakes are high. When an AI tells children that "age is just a social construct" in response to disclosures of their age, the platform has crossed from neutral tool into active enabler of harm. As these companion chatbots become more sophisticated and accessible, the responsibility to protect young users must shift from reactive moderation to proactive design and enforcement.