
A disturbing new report has exposed how TikTok and Instagram continue to algorithmically recommend suicide and self-harm content to vulnerable users at an 'industrial scale', eight years after the tragic death of Molly Russell.
Platforms Fail to Protect Users
The investigation found that despite repeated promises from tech companies to improve safety measures, their algorithms still actively push harmful content to users, particularly young people struggling with mental health issues.
Molly's Legacy Ignored
Fourteen-year-old Molly Russell took her own life in November 2017 after viewing extensive self-harm and suicide content on social media. Her father, Ian Russell, has since become a leading campaigner for online safety reform.
The report reveals:
- Platforms continue to recommend graphic self-harm content within minutes of account creation
- Algorithms create 'rabbit holes' that trap vulnerable users in cycles of harmful content
- Safety features are easily circumvented by using alternative hashtags and phrases
Tech Giants Under Fire
Mental health charities and campaigners have expressed outrage at the findings, accusing social media companies of 'putting profits before people's lives'.
A spokesperson for Meta, Instagram's parent company, said it had 'invested heavily in safety measures', while TikTok emphasised its 'ongoing commitment to user wellbeing'. However, the report's authors argue these measures remain woefully inadequate.
Urgent Calls for Regulation
The findings have intensified demands for the UK government to fully implement and strengthen the Online Safety Act, which holds tech companies legally accountable for harmful content on their platforms.