UK Safety Laws Fail to Stop Social Media Giants Pushing Suicide Content to Teens

Alarming new evidence reveals that social media platforms are still recommending suicide and self-harm content to vulnerable teenagers, despite the introduction of stricter UK online safety laws designed to protect young users.

Platforms Ignoring Safety Commitments

Internal data from major social networks shows algorithms continue to push disturbing content to under-18s within minutes of account creation. This includes graphic imagery, suicide methods, and communities promoting self-harm.

Regulatory Shortcomings Exposed

The findings come just months after the UK's Online Safety Act, which promised to hold tech firms accountable for harmful content, came into force. Experts argue its enforcement mechanisms remain too weak to compel meaningful change.

Psychological Impact on Youth

Mental health professionals warn that this ongoing exposure correlates with rising hospital admissions for self-harm among UK teens. "The platforms know exactly what they're doing," said one child psychologist. "Their engagement models prioritise profit over protection."

Industry Response Falls Short

While social media companies point to improved content moderation systems, researchers found these measures were easily circumvented. Test accounts registered as teenagers received harmful recommendations despite age verification and parental controls being in place.

The revelations have prompted calls for Ofcom to accelerate its enforcement timeline and consider tougher penalties, including potential criminal liability for tech executives.