Instagram is facing intense scrutiny and mounting pressure following a sweeping investigation that discovered its algorithms were actively promoting pro-Nazi, Holocaust-denying, and openly anti-Semitic reels to millions of users.
Algorithms Amplifying Hate
The damning investigation by Fortune magazine found that this offensive content was not only circulating widely but was also being placed directly alongside advertisements from some of America's most prominent corporations. These included JP Morgan, SUNY, and even the US Army, though there is no suggestion the companies were aware of the placement.
These revelations arrive just months after Meta CEO Mark Zuckerberg enacted a dramatic loosening of content rules and dismantled the company's independent US fact-checking programme. Zuckerberg defended this shift as a move to prioritise 'free speech,' but critics argue it has directly led to extremist propaganda becoming more readily accessible on the platform.
A Gateway to Extremism
At the heart of the controversy was a now-deleted account for a fashion brand called @forbiddenclothes. This account, which posted fascist-themed memes that garnered massive engagement, served as a key entry point. One of its pinned reels, seen by 31 million users, featured a Nazi SS officer from the film Inglourious Basterds in a meme about family political arguments.
According to the report, comments condemning the clip's glorification of Nazism were vastly outnumbered by positive responses. More alarmingly, engaging with this single reel opened a gateway to more egregious content. The algorithm's recommendation engine would then 'personalise' a user's feed, rapidly transforming it into a stream of anti-Semitic conspiracies, racist jokes, and glorification of Nazi imagery, often disguised as edgy humour.
One subsequent reel showed an AI-fabricated 'translation' of an alleged Adolf Hitler speech, complete with graphics that falsely identified Jewish people in Trump's cabinet and at major media organisations, marking their faces with Jewish stars. This video was viewed 1.4 million times, with comments including, 'We owe the big man an apology.'
Financial Incentives and Policy Shifts
The investigation also uncovered the significant financial gains tied to posting offensive material. A UK-based meme-page operator revealed he made over £10,000 selling T-shirts and shout-outs, noting that Hitler-themed posts 'always get more traction.' In a stark admission, a US-based tech worker, who identified as Jewish, said he made nearly $3,000 from Instagram bonuses before being demonetised, confessing he posted the content purely because 'offensive and political' reels grow accounts the fastest.
This environment was exacerbated by Zuckerberg's policy reversal on January 8, 2025, a mere two weeks before Donald Trump's return to the White House. In a move that stunned civil-rights groups, Zuckerberg announced Meta was ending its use of independent fact-checkers on Facebook and Instagram, replacing them with X-style 'community notes.' He also raised the threshold for removing hate speech to 'restore free expression.'
Zvika Krieger, Meta's former director of responsible innovation, told Fortune that after this change, moderation systems were 'intentionally made less sensitive,' creating a system where 'whatever creates the most engagement is going to get rewarded.'
The consequences were swift. The Anti-Defamation League reported a significant increase in anti-Semitic content following the policy shift, and in May said that Jewish members of Congress had experienced a fivefold increase in harassment on Facebook.
While Meta eventually removed the flagged posts after Fortune alerted the company, the videos had already achieved a massive reach. The company issued a short statement: 'We don't want this kind of content on our platforms, and brands don't want their ads to appear next to it.' Meta also claimed that in the first half of 2025, it actioned nearly 21 million pieces of content for violating its rules on Dangerous Organisations and Individuals, though it later admitted its proactive detection rate was 'in the low 90s,' not the 99 percent initially claimed.