X's Algorithm Exposes Teenagers to Explicit Sexual Content, Report Finds
Children as young as 13 are being recommended sexually explicit content on Elon Musk's social media platform X, according to a damning new report that highlights potential breaches of the Online Safety Act. The Centre for Countering Digital Hate has issued a stark warning, revealing that X's algorithm and inadequate safeguards are exposing teenagers to pornography and possible direct sexual contact from adults.
Research Methodology and Disturbing Findings
Researchers from the non-profit organisation established two UK-based accounts, posing as a 13-year-old boy and girl, to rigorously test X's adult content policies across multiple platform features. The investigation examined search results, algorithmic recommendations, user profiles, and the Communities section to assess protection measures.
When testing the search function, researchers entered sexual terms that "a curious teenager might naturally search for", including "sex, porn, boobs, and hentai". The results were alarming: 80% of searches for sexual terms returned explicit material within the first ten results containing media. Multiple videos and images surfaced, including graphic sexual acts, with no age verification, content warnings, or filtering mechanisms intervening between the teenage accounts and the explicit material.
Algorithmic Amplification of Harmful Content
Following these explicit searches, researchers found that the accounts' "For You" feeds were quickly reshaped to include graphic sexual imagery. An astonishing 30.5% of recommended posts in the feed were explicit content, demonstrating how rapidly the platform's recommendation systems adapt to expose young users to harmful material.
The teenage accounts were also able to join 15 out of 20 popular sexual communities on X without meaningful barriers. These groups included names like "Virgin Trades", "Onlyfans virgin club +18", "Kink Kings & Queens", and "Goon Group". According to the report, these communities contained overtly sexual and transactional content, including posts soliciting or offering nude images.
Direct Contact Risks and Safeguard Failures
To assess whether community participation could lead to direct contact between adults and teenagers, researchers liked posts in which users offered to send messages in exchange for engagement. With a single change to account settings, the teen accounts were able to override the default restriction limiting direct messages to accounts they follow—a fundamental safeguard intended to protect young users.
This circumvention resulted in numerous direct message requests from adult accounts, including one unsolicited video of a man masturbating. The findings reveal how X's protective measures can be easily overcome, leaving children vulnerable to grooming and sexual exploitation.
Regulatory Context and Enforcement Challenges
These revelations emerge almost a year after the implementation of the Online Safety Act, which legally requires tech platforms to enforce age limits and protect child users from harmful content. Adult websites, including the world's most visited pornography site PornHub, have been compelled to introduce age verification for UK visitors as part of government enforcement actions.
Services that fail to comply face substantial penalties: fines of 10% of worldwide annual revenue or £18 million, whichever is greater. Ofcom opened an investigation into X earlier this year following reports that its Grok AI chatbot was being used to create and share sexual deepfakes of real people, including children.
Expert Warnings and Official Responses
Callum Hood, head of research at CCDH UK, emphasized the seriousness of the findings: "These findings show X will quickly reshape its 'For You' feed to recommend explicit content to young users. Worse, with a single change to account settings, adults can directly message them, leaving children exposed to more explicit sexual material and the risk of grooming."
Hood added: "Nearly a year after enforcement began, X is still failing to comply with the Online Safety Act, allowing children into sexualised spaces and continuing to host harmful content. How much more evidence is needed before X takes its responsibility to protect children seriously?"
An Ofcom spokesperson stated: "Protecting children is a priority for Ofcom. Under the Online Safety Act, tech firms are accountable for ensuring sites, platforms and apps are safer for the children who use them. They must take a safety-first approach in how their services are designed and operated."
The spokesperson continued: "Those companies that do not comply can expect to face enforcement action. We've launched investigations into more than 100 platforms, including X, and issued over a dozen fines for non-compliance."
A Department for Science, Innovation and Technology spokesperson described the findings as "disturbing" and emphasized that platforms have clear responsibilities under the Act to protect children from harmful content. The spokesperson noted that Ofcom has already issued over £3 million in fines and has the government's full backing to take necessary enforcement actions.
The government has also launched a consultation on additional measures to protect children online, examining everything from age limits and safer design features on AI chatbots and games to potential social media restrictions for young users.