
Meta, the parent company of Instagram, is facing renewed criticism after internal documents suggested that the platform may be exposing young users to harmful content, including self-harm and eating disorder material.
Growing Concerns Over Youth Safety
Recent investigations have revealed that Instagram's algorithm continues to recommend potentially dangerous content to vulnerable teenagers, despite the company's public commitments to improve safety measures. Campaigners argue these findings demonstrate systemic failures in protecting young users.
What the Research Shows
Studies indicate that:
- Teenagers searching for fitness content are often directed towards extreme dieting posts
- Users viewing mental health content frequently encounter self-harm related material
- Young girls are particularly susceptible to body image issues exacerbated by the platform
Regulatory Pressure Mounts
The revelations come as governments worldwide are increasing scrutiny of social media platforms. In the UK, the Online Safety Bill is expected to impose stricter obligations on tech companies to protect children from harmful content.
Children's charities have called for immediate action, with some demanding that Instagram implement more robust age verification systems and overhaul its recommendation algorithms.
Meta's Response
In a statement, Meta acknowledged the challenges but emphasised its ongoing investments in safety tools and parental controls. The company highlighted recent features designed to limit sensitive content for younger users.
However, critics argue these measures do not go far enough, pointing to internal research that allegedly shows Meta has been aware of these issues for years without implementing adequate solutions.