Political Deepfakes Surge, Blurring Reality and Influencing Beliefs

Political Deepfakes Experience Dramatic Surge in Recent Years

According to research from the Governance and Responsible AI Lab, the volume of political deepfakes has escalated significantly over recent years. This trend is largely attributed to advancements in generative AI technology, which have made it easier for individuals to create realistic fake images and videos.

Fake Personas and Propaganda: The New Frontier

Online creators are not only fabricating content featuring prominent public figures but also inventing entirely fictional personas. These avatars, such as AI-generated women in military uniforms, are being used in contexts that generate revenue and serve as effective propaganda tools.

Daniel Schiff, an assistant professor of technology policy at Purdue University and co-director of the Governance and Responsible AI Lab, noted, "We are blending the lines between political cartoons and reality. A lot of people feel like these images or videos or the stories they convey, feel true."


Case Study: Jessica Foster and Monetization Strategies

In December 2025, an Instagram account for Jessica Foster, a blond AI-generated woman often depicted in a US military uniform, surpassed 1 million followers. Its content showed Foster in barracks, in offices, and alongside figures like Donald Trump, often with a focus on her feet in high heels.

This account was linked to OnlyFans, where users could purchase foot photos purportedly from Foster. Sam Gregory, executive director of Witness, explained, "A lot of the AI-generation is to basically get clicks and money or to drive people to a more lucrative place." The account has since been removed.

Political Influence and the Persuasiveness of Deepfakes

Deepfakes are also being leveraged for political purposes. During the war in Iran, for example, fake videos of female Iranian soldiers circulated on social media, even though Iran prohibits women from serving in combat roles. AI-generated content has also been shared by political figures, including Donald Trump and Gavin Newsom, to influence public opinion.

Valerie Wirtschafter, a Brookings Institution fellow, stated, "The deepfakes are just another layer added on in terms of this process of reinforcing, rather than revisiting, what people believe is true." This highlights how such content can solidify existing beliefs, even when consumers are aware of its artificial nature.

The Threat of AI Swarms and Technological Solutions

Researchers warn that the technology behind deepfakes could evolve into "AI swarms," capable of autonomously coordinating and fabricating consensus. Wirtschafter compared this to "a troll farm without actually having to have people any more."

To combat this, initiatives like the Coalition for Content Provenance and Authenticity have developed standards for labeling AI-generated content. Implementation has been inconsistent, however, with labeling rates varying widely across platforms such as LinkedIn, Pinterest, and Instagram.

Gregory attributed this inconsistency to a "failure of political will at the senior levels" of major tech companies. He emphasized, "We don't need to give up on the ability to discern what is real from synthetic. But we do need to act fast."

As political deepfakes continue to proliferate, the challenge remains to enhance digital authenticity and mitigate their influence on public discourse and beliefs.
