Fake Iran War Missile Strikes and Drone Attacks Surge on Social Media Platforms
Technology Secretary Liz Kendall has said she is "deeply concerned" about the proliferation of false information online regarding the Iran war, amid a significant surge in distorted and manipulated content across social media. The Cabinet minister acknowledged that the Government must examine what additional measures can be implemented "during crisis moments" to ensure accurate information reaches the public.
Cross-Government Efforts Underway to Tackle Misinformation
Cross-government work is actively progressing behind the scenes to address the issue. Ms Kendall told The Mirror that misinformation and misrepresentation represent a genuine concern shared by MPs across all political affiliations. She stressed the need for closer scrutiny of the actions that can be taken, particularly in times of crisis, to prevent the spread of incorrect information online.
Expert Analysis Reveals Alarming Scale of Synthetic Content
Timothy Graham, a digital media specialist at the Queensland University of Technology, described the magnitude of misinformation related to the Iran war as "truly alarming". He detailed a full spectrum of synthetic and manipulated content, ranging from AI-generated videos of fabricated missile strikes and simulated drone attacks to repurposed footage from other conflicts presented as current events.
Mr Graham criticised Elon Musk's X platform, arguing that its structure inherently rewards emotionally charged and shareable content. Posts depicting fake missile strikes that achieve millions of views within hours are not anomalies, he said, but the system functioning as designed. He also pointed to flaws in X's community notes system, which takes 15 to 24 hours to flag issues, while viral misinformation typically peaks within about four hours.
Accessibility of AI and Foreign Interference Concerns
Mark Frankel, head of public affairs at Full Fact, noted that while distorted content is common in conflicts, the scale of such material around the Iran war has been "enormous". He attributed this partly to AI technology being far more accessible than during earlier conflicts such as the Ukraine war. Mr Frankel explained that some users spread false content for profit, while foreign bot farms may exploit the situation for political advantage, with Russian or Iranian actors potentially seeking to influence perceptions.
Calls for Regulatory Action and Independent Oversight
Chi Onwurah, chair of the Commons Science, Innovation and Technology Committee, argued that social media companies should not be permitted to "mark their own homework". The Labour MP urged the Government to introduce legislation regulating AI platforms and to impose duties requiring risk assessments and reporting on legal but harmful content. She emphasised that without independent access to comprehensive data, it is impossible to gauge the extent of misleading content or the effectiveness of moderation systems.
Ms Onwurah concluded that these steps are essential to fostering a safer online environment for all users, highlighting the need for transparent and accountable regulation in the digital age.