A bombshell report from the UK's media regulator, Ofcom, has exposed serious failings in how technology platforms assess their own online harms, revealing that not a single platform considers itself high risk for suicide or self-harm content.
'Abysmal' Self-Assessment Leaves Children at Risk
Campaigners have branded the findings "abysmal," warning that allowing social media companies to mark their own homework is dangerously unreliable. The report, published on 4 December 2025, stems from new online safety laws that came into force in the summer, which required tech firms to assess their platforms' risks to children.
Ofcom found widespread inconsistencies in how platforms evaluated illegal and harmful material. The watchdog identified common gaps in assessments around child sexual abuse and exploitation, as well as content harmful to children. In several cases, Ofcom had to force companies to revisit their risk assessments due to "substantive concerns" about their methodology and conclusions.
Platforms Accused of Designing Assessments to Minimise Risk
The regulator's analysis was scathing: many providers, it said, used risk assessment frameworks that "appeared designed" to conclude their services presented only negligible or low risk. Alarmingly, few online services separately assessed harmful content related to suicide, self-harm, and hate. Furthermore, many failed to thoroughly investigate how encrypted messaging could increase risks like grooming.
Andy Burrows, chief executive of the Molly Rose Foundation (MRF), said: "It’s staggering that not one single platform believes they are high risk for suicide or self-harm content that is harmful to children." He criticised Ofcom's enforcement as "woeful" and claimed the report signals to companies they can do the bare minimum.
Research Shows Stark Reality for Young Users
The platforms' conclusions stand in stark contrast to research on children's actual exposure to harmful content. A study by the MRF in October 2025 found that 49% of girls had been exposed to high-risk suicide, self-harm, depression, or eating disorder content on major platforms in just one week.
The foundation was established after the death of 14-year-old Molly Russell in 2017, whose inquest concluded that social media content "more than minimally" contributed to her suicide. Separate polling by Internet Matters last month found that over 70% of parents fear their children will encounter self-harm or suicide content online.
A spokeswoman for Internet Matters stated: "This suggests that self-assessment by platforms is not reliable in reporting where harms are taking place."
Ofcom Promises Tougher Enforcement Ahead
Looking ahead, Ofcom has mandated that the services most used by children, including Facebook, Instagram, TikTok, Pinterest, and YouTube, provide comprehensive details of their child safety measures and make prompt improvements. The regulator will issue formal, enforceable information requests early next year and provide an update in May 2026, which may include launching investigations.
An Ofcom spokesman acknowledged that online platforms have been "unregulated and unaccountable" for decades. While stating that "change is happening," he insisted tech firms must go much further. Ofcom has already opened investigations into more than 90 platforms and fined three providers, with more enforcement expected in the coming months.
Campaigners are now calling on the Government to strengthen legislation to hold companies accountable for dangerous products and to ensure the regulator effectively reduces harm to children.