The digital landscape is being poisoned by a new, potent form of misinformation: AI-generated 'rage-bait' videos deliberately engineered to provoke outrage and distort public debate. The tactic has become so pervasive that Oxford University Press recently named 'rage bait' its word of the year, noting that usage of the term has tripled in the past twelve months.
From Fake Classrooms to Viral Meltdowns: The Global Rage-Farming Industry
In early December, a grainy, CCTV-style video purporting to show British schoolchildren being led in Muslim prayer circulated widely on social media. Seized upon by right-wing accounts, it fed directly into heated national conversations about immigration. The clip, however, was a fabrication, originating from a shadowy Facebook page dedicated to posting streams of AI-generated content designed to stoke division.
This is not an isolated UK phenomenon but part of a global, profit-driven industry. An investigation by The Times traced one operation to Sri Lanka, where a creator known as Geeth Sooriyapura reportedly makes hundreds of thousands of dollars by spreading falsehoods and training others to do the same. His strategy explicitly targets the feeds of older demographics to maximise engagement and, consequently, ad revenue.
In the United States, when SNAP food benefit payments were delayed during a government shutdown, social media was flooded with viral videos of beneficiaries having 'public meltdowns' in stores. Many were AI fakes, yet they were amplified by political influencers and even picked up by mainstream news outlets such as Fox News, which initially reported them as real.
Blurred Lines and Bad Faith: Why It's So Hard to Stop
The technology behind these videos, such as OpenAI's Sora and Google's Veo, is now frighteningly accessible: a user can generate a convincing clip from a simple text prompt. And while outputs carry watermarks, these are easily cropped or edited out, leaving a seamless fake ready to be unleashed online.
Jeremy Carrasco, who runs the TikTok account ShowtoolsAI to debunk misinformation, explains the shift. He has observed accounts that once posted calming AI-generated ASMR content pivot to politically charged rage bait. "There is little to no regulation unless it crosses rules like harassment," he notes. "Since most of the videos deal with people who aren't real, no one is directly being harmed." This legal and ethical grey area allows the practice to thrive.
Carrasco uses the fake UK classroom video as a case study, pointing out tell-tale signs like non-existent furniture and an inaccurate map on the wall. However, such scrutiny is rare among viewers scrolling quickly through their feeds.
A Post-Truth Perfect Storm: Virality Over Veracity
This crisis is a perfect storm of advanced AI and a social media ecosystem that has long prioritised virality over truth. The shift towards algorithmically curated feeds, which began around 2009 and culminated in today's 'For You' pages, made rage and engagement inseparable. Today, AI acts as an automatic rage-bait machine, manufacturing convincing scenarios that confirm existing biases.
Most troubling of all, for a significant number of people the authenticity of the content is secondary. As one social media influencer commented on a debunked, racist AI video: "I think it's AI… but if it's not, those are your seven choices, not mine." The sentiment alone, real or fabricated, is enough to justify and amplify anger.
The result is a dangerously distorted public sphere where artificially inflamed emotions shape political discourse, undermine trust, and deepen societal divisions, often without the public even realising they have been manipulated.