State Actors Amplify AI-Generated Misinformation in Iran War
Misrepresented and fabricated videos have flooded social media since the Iran war began last weekend, fueled in significant part by state-linked propaganda and influence campaigns. These efforts focus primarily on distorting perceptions of who is winning the conflict and the scale of casualties, creating a chaotic information environment.
Fabricated Videos and State-Linked Campaigns
As attacks escalated following the bombing of Iran by U.S. and Israeli forces, a video circulated widely showing crowds gazing at fire, smoke, and debris from a high-rise building, purportedly in Tel Aviv. Social media users falsely claimed it showed the aftermath of an Iranian strike on the skyscraper. While buildings in Tel Aviv have indeed been struck by Iranian missiles during the war, this particular video was not authentic. It was generated with artificial intelligence and disseminated by accounts associated with the Iranian government to exaggerate Iran's military successes.
Multiple clues reveal the video's inauthenticity, such as two cars on the left side appearing fused together and a man in the bottom-right corner whose elbow seems to pass through a backpack. This instance highlights a broader trend where state actors produce targeted content with clear narrative structures, using videos to bolster specific statements about the conflict and geopolitical dynamics.
Expert Insights on Information Operations
Melanie Smith, senior director of policy and research on information operations at the Institute for Strategic Dialogue, noted, "The content that's coming from state actors tends to be a little better targeted. They have a very clear kind of narrative structure, and the videos are just used to support some kind of statement they want to make about the conflict and about the kind of geopolitical situation writ large."
Pro-Iran social media accounts have promoted narratives that overstate destruction and death tolls, aligning with reports from Iranian state media. This has produced numerous AI-generated videos of supposed air strikes, like the Tel Aviv high-rise example. Separately, an ongoing Russia-aligned influence operation, known as Operation Overload or Matryoshka, has shared videos impersonating intelligence agencies and news outlets in an effort to undermine public safety and influence behavior, a tactic it previously used during election cycles.
Censorship and the Information Vacuum
Misrepresented and fabricated videos were also prevalent in recent conflicts such as the Russia-Ukraine and Israel-Hamas wars. Experts point to a key difference now, however: internet shutdowns and widespread censorship have largely cut off firsthand accounts from the Iranian public. Those perspectives could have either corroborated or contradicted the Iranian government's narrative; their absence leaves official claims largely unchecked.
Todd Helmus, a senior behavioral scientist at RAND specializing in irregular warfare, terrorism, and information operations, explained, "In Ukraine, that message was so full-throated it really changed the entire dynamic of the conflict because the world really aligned with the perspective of Ukrainians facing the attacks and showing resilience in light of the attacks, but we're sort of missing that story from Iran."
Opportunistic Users and AI's Role
Beyond state actors, opportunistic social media users unaffiliated with any government have contributed heavily to misinformation in the early days of the Iran war. They have presented old footage from other conflicts as recent, shared video game clips as real combat, and posted their own AI-generated content. Artificial intelligence in particular has amplified misinformation in ways that were unimaginable just a few years ago; combined with state-linked disinformation and censorship, it creates a vacuum in which the truth becomes elusive.
Melanie Smith warned, "The volume of AI content is starting to just pollute the information environment in these kinds of crisis settings to a really terrifying degree. The inability to get access to verified and credible information in times like this — it's getting harder and harder to do that."
Platform Responses and User Awareness
In response, Nikita Bier, X's head of product, announced in a Tuesday post that the platform will suspend users from its revenue-sharing program if they post AI-generated content from an armed conflict without proper disclosure. Penalties include a 90-day suspension for first offenses and permanent bans thereafter.
Emerson Brooking, director of strategy and resident senior fellow at the Atlantic Council's Digital Forensic Research Lab, emphasized that social media platforms are now front lines in warfare. He urged users to recognize that state actors may try to exploit them, regardless of how far they are from the conflict. "If you're in these spaces, just understand that this is an extension of the physical battle space. That there are actors on all sides of the conflict that are actively trying to spread propaganda and disinformation to convince you that certain things are true that aren't. That your eyeballs and your attention are an asset," he stated.
