
Instagram is systematically promoting dangerous eating disorder content to teenagers and other young users through its powerful recommendation algorithm, an investigation has found.
Algorithm Amplifies Harmful Content
Despite public commitments from parent company Meta to strengthen safety measures, the platform continues to serve potentially triggering material to vulnerable users. Researchers found that viewing even a single post about body image issues can flood a user's feed with extreme dieting content and recommendations for pro-eating-disorder communities.
Platform Fails to Enforce Own Policies
Meta's internal policies explicitly prohibit content that promotes or glorifies eating disorders. However, the investigation reveals massive gaps in enforcement, with thousands of accounts and hashtags circumventing detection through coded language and subtle imagery.
Impact on Young Users
Mental health charities report alarming increases in young people seeking help after encountering harmful content on the platform. "The algorithmic amplification of this content creates a dangerous feedback loop that can seriously undermine recovery efforts," explains one leading psychologist.
Meta's Response Questioned
While Meta points to its investment in AI detection tools and human moderators, critics argue the company prioritizes engagement over user safety. The investigation found that:
- Recommended content often bypasses existing safeguards
- User reports frequently fail to result in the removal of violating content
- Recovery communities are increasingly served harmful recommendations
This latest revelation adds to growing pressure on social media giants to fundamentally redesign their algorithms to prioritize user wellbeing over engagement metrics.