In a major policy shift, the popular artificial intelligence platform Character.ai has announced it will ban all users under the age of 18 from interacting with its chatbots. The decision, effective from 25 November, comes after mounting pressure from regulators, safety experts, and parents concerned about the nature of conversations young people were having with virtual characters.
The Catalyst for Change
The platform, which launched in 2021 and is used by millions, is now facing multiple lawsuits in the United States. These legal actions include one case linked to the death of a teenager, with some critics labelling the site a "clear and present danger" to young people. The concerns stem from chatbots' tendency to fabricate information and to respond with excessive empathy and encouragement towards vulnerable users.
Online safety campaigners have welcomed the ban but argue it is a belated correction of a fundamental mistake. Andy Burrows, Chief Executive of the Molly Rose Foundation, stated, "Yet again it has taken sustained pressure from the media and politicians to make a tech firm do the right thing, and it appears that Character AI is choosing to act now before regulators make them." The foundation was created in memory of Molly Russell, who took her own life at 14 after viewing suicide material online.
History of Harmful Content
Character.ai has been repeatedly criticised for hosting harmful or offensive chatbots. In 2024, avatars impersonating Brianna Ghey, a British teenager who was murdered in 2023, and Molly Russell were found on the site before being removed.
The controversy escalated in 2025 when an investigation by the Bureau of Investigative Journalism (TBIJ) uncovered a chatbot based on the convicted paedophile Jeffrey Epstein. This particular bot had been used in more than 3,000 chats. Disturbingly, the TBIJ reported that the "bestie Epstein" avatar continued to flirt with a user even after they disclosed they were a child. This bot was among several investigated and subsequently taken down by the platform.
New Safeguards and Future Plans
In response to these reports, Character.ai is implementing stricter safety protocols. Under-18s will now be restricted to creating content, such as videos, with their characters, rather than engaging in open-ended conversations. The company's boss, Karandeep Anand, told BBC News, "Today's announcement is a continuation of our general belief that we need to keep building the safest AI platform on the planet for entertainment purposes."
The company claims it is taking an "aggressive" approach to AI safety, promising enhanced parental controls and guardrails. Future plans include developing new age-verification methods and funding a new AI safety research lab. Anand also revealed a new aim to provide teens with "even deeper gameplay [and] role-play storytelling" features that are designed to be "far safer than what they might be able to do with an open-ended bot."
The online safety group Internet Matters welcomed the move but emphasised that such measures should have been implemented from the outset. Their research indicates that "children are exposed to harmful content and put at risk when engaging with AI, including AI chatbots."