Elon Musk's social media platform X has announced a stringent new policy to combat the spread of artificial intelligence-generated misinformation during armed conflicts. The company will ban users from earning revenue on the platform if they repeatedly post unlabelled AI-generated war videos, following a torrent of fake battle scenes that flooded social media feeds at the start of the Iran conflict.
Strict Penalties for Misleading Content
The social media giant, which has roughly half a billion monthly active users worldwide, will suspend violators from its creator revenue sharing programme. Users who post AI-generated videos depicting armed conflict without clearly disclosing that the content was created with artificial intelligence will be barred from earning revenue through the platform for 90 days.
A second infraction will result in a permanent ban from the revenue programme, according to the announcement made on Tuesday night. The move follows an initial phase of the Iran conflict marked by an overwhelming volume of bogus footage circulating across multiple social media platforms.
Widespread Circulation of Fake Battle Scenes
Timelines on X, as well as on Meta's Instagram and Facebook, have carried numerous fabricated battle scenes that achieved massive reach despite being entirely artificial. One particularly viral example showed Iranian rockets pursuing and shooting down a United States military jet; according to BBC Verify, it was viewed 70 million times before being debunked.
Another widely circulated clip used AI to replace the genuine smoke rising from an actual missile strike site with a fake fireball several times larger than the original. Such videos have sown significant confusion about actual events on the ground during international conflicts.
Financial Incentives for Viral Content
The platform's advertising model creates substantial financial incentives to produce shocking viral posts: users with followings approaching 100,000 people can earn hundreds of dollars a month through X's revenue sharing programme. That economic motivation has encouraged some creators to prioritise engagement over accuracy, particularly during high-profile international events.
Nikita Bier, the head of product at X, emphasised the critical importance of authentic information during wartime. "During times of war, it is critical that people have access to authentic information on the ground," Bier stated. "With today's AI technologies, it is trivial to create content that can mislead people. Starting now, users who post AI-generated videos of an armed conflict – without adding a disclosure that it was made with AI – will be suspended from creator revenue sharing for 90 days. Subsequent violations will result in a permanent suspension from the programme."
Other Notable Examples of War Misinformation
The problem extends beyond X, with other fake videos of the conflict achieving enormous reach across social media. One clip circulating widely on Instagram purported to show a massive blaze after "Iran destroyed the US airbase in Riyadh", but was in fact 18-month-old footage of the aftermath of an Israeli strike on an oil refinery in Hodeidah, Yemen.
Full Fact, the United Kingdom's leading independent fact-checking organisation, has reported observing artificial intelligence dramatically accelerating the spread of misinformation across social media platforms. Steve Nowottny, Full Fact's editor, highlighted the concerning scale of the problem: "In the last few days we've seen lots of examples of AI images shared across different social media platforms as if they are real, including fake pictures of an aircraft carrier and the Burj Khalifa on fire, and an image supposedly showing the body of Ayatollah Khamenei."
Nowottny further explained that even when AI-generated images appear low quality or retain visible watermarks, they frequently achieve widespread distribution. "The sheer volume of this fake content and the ease with which it is generated and spreads is a real concern," he emphasised, underscoring the challenges facing social media platforms and fact-checking organisations alike.
Meta, which operates Instagram and Facebook, has been approached for comment regarding its policies on AI-generated war content but had not responded at the time of reporting. The broader technology industry continues to grapple with balancing innovation in artificial intelligence with responsible content moderation during sensitive global events.