AI 'Rage-Bait' Videos Twist Emotions: Oxford's 2025 Word of the Year Exposed

The era of AI-generated 'rage-bait' has officially arrived, with the manipulative tactic named the Oxford University Press Word of the Year for 2025. This marks a disturbing new chapter where artificial intelligence is being weaponised to distort public discourse, provoke outrage, and twist emotions, often without viewers realising the content is fabricated.

The Global Rise of Manufactured Outrage

Earlier in December 2025, a grainy, CCTV-style video began circulating online. It purported to show a British classroom where children were being led by their teacher in a Muslim prayer, repeating "Allahu Akbar". The clip was swiftly picked up by prominent right-wing social media accounts, perfectly timed to coincide with heated debates on immigration and asylum following proposals from Labour's Shabana Mahmood.

The video was entirely fake, generated by artificial intelligence. It originated from an obscure Facebook page offering an "unfiltered perspective" of the UK, which regularly posts streams of AI-generated content: fake protesters, fabricated man-on-the-street interviews spouting anti-immigration views, and other inflammatory material. The page's owner remains unknown, highlighting how anyone, anywhere, can now fuel division.

This is not an isolated UK phenomenon. As the US government entered an unprecedented shutdown at the end of October 2025, delaying vital SNAP food assistance payments for 42 million Americans, social media was flooded with viral videos. They showed furious people screaming about the delays, shoppers having "public meltdowns" at checkouts, and individuals collapsing in fits of rage. Many were AI fabrications, yet they were shared by political influencers and even reported by mainstream news outlets as real.

The Lucrative Business of Digital Hate Farming

The driving force behind this trend is starkly financial. An investigation by The Times exposed a so-called "King of Facebook Ads," Geeth Sooriyapura in Sri Lanka, who reportedly makes hundreds of thousands of dollars by spreading falsehoods. His claims include assertions that council houses are reserved solely for Muslims or that the Labour Party is owned and run by Islamists.

Sooriyapura even runs an academy training others to set up similar pages, advising them to target "old people … because they are the ones who don’t like immigrants." The model is simple: inflammatory posts drive more clicks and higher engagement, which in turn generate significant ad revenue. He claims to have earned over $300,000 from this practice of "rage-farming."

On Instagram, one creator with 6,000 followers—using a pseudonym and a profile picture of an American flag with Nicki Minaj—posted a fake video created with OpenAI's Sora tool. It depicted an overweight white woman arguing with store staff over SNAP cuts. The video amassed 600,000 views on Instagram, with another version reaching 4.4 million. The comment sections filled with vitriol, validating viewers' pre-existing biases, with few questioning the clip's authenticity.

How to Spot an AI Fake

Content creator Jeremy Carrasco, who runs the TikTok account ShowtoolsAI, specialises in identifying AI misinformation. He uses the fake UK classroom video as a case study, pointing out tell-tale signs:

  • The CCTV-style aesthetic is itself suspicious: low-fidelity footage demands less visual detail, making AI flaws easier to hide.
  • The teacher sits on a chair that doesn't logically exist in the scene.
  • When she kneels down, there is a strange void behind her.
  • The map on the classroom wall is inaccurate.

Carrasco notes that while platforms like Sora and Google's Veo have safeguards against creating deepfakes of real people, generating entirely fake worlds with politically charged scenarios is a regulatory grey area. "There is little to no regulation unless it crosses rules like harassment," he explains. "Since most of the videos deal with people who aren't real, no one is directly being harmed."

A Post-Truth Problem for the Algorithmic Age

The underlying issue extends beyond AI technology to the very architecture of social media. The pivot towards algorithmically curated "For You" feeds, which began around 2009, made virality a primary metric. This created an ideal environment for rage bait and propaganda to flourish. Today, AI tools like Sora act as automatic rage-bait machines, blurring the line between reality and fabrication more convincingly than ever before.

Videos are uploaded at a pace impossible for platforms to moderate effectively. Some carry warnings; many do not. Once viral, the damage is done. Perhaps most chillingly, for some, the truth is irrelevant. Commenting on a racist AI video that was later taken down, influencer Nikko Ortiz told his 3.5 million subscribers: "I think it’s AI… but if it’s not, those are your seven choices, not mine."

The terrifying conclusion is that in our post-truth era, it no longer matters whether something is real. If it looks real and feels real, that is reason enough to justify rage for far too many people. The naming of "rage-bait" as the word of the year is a stark warning about the emotional manipulation now embedded in our digital lives.