AI Fakes on Minneapolis and Venezuela Spread Rapidly - How to Identify Real Content

AI-Generated Content Floods Internet with False Narratives

Artificial intelligence is now generating vast quantities of content at unprecedented speed, filling social media timelines with videos ranging from the bizarre to the disturbing. Experts tell Bryony Gooch that AI technology is poisoning an internet already saturated with disinformation, making it significantly harder to verify what is true.

The Challenge of Discerning Reality in 2026

In today's digital landscape, it rarely takes more than a few scrolls to encounter AI-generated video content. From fabricated footage showing the capture of Venezuelan leader Nicolas Maduro by US forces to false videos depicting ICE agents fatally shooting members of the public in Minneapolis, millions are consuming AI-generated media about significant global events. This proliferation makes it increasingly difficult to distinguish authentic content from fabricated content.

"As AI videos continue to improve, it's becoming harder to trust what we see while scrolling through social media," explains Sofia Rubinson, senior editor at NewsGuard's Reality Check. "Visual cues that once helped us spot fake content are no longer reliable, increasing the risk of misinformation spreading at scale — especially when AI fakes are amplified by well-known or verified accounts."


Political Actors Exploit the Blurred Lines

Within this largely unregulated information space, bad-faith political actors are now claiming that genuine videos are actually AI deepfakes. "What we now see is a real video will start circulating and they will claim it's an AI deepfake, which gives them plausible deniability," warns Professor Alan Jagolinzer, co-chair of the Cambridge Disinformation Summit. "That's actually part of the danger here, and arguably it's more insidious than people buying into a fake video."

Even the White House recently provoked outrage after sharing a digitally altered photograph of an activist arrested for organizing an anti-ICE protest at a Minnesota church. The image had been edited to make it appear as though the woman was crying. Digital forensics expert Hany Farid, a professor at the University of California, Berkeley, confirmed the image was likely altered with AI.

"This is not the first time that the White House has shared AI-manipulated or AI-generated content," Farid told CBS News. "This trend is troubling on several levels. Not only are they sharing deceptive content, they are making it increasingly more difficult for the public to trust anything they share with us."

Specific Examples of AI Fabrications

One particularly viral AI-generated video purported to show a Somali woman caught at Minneapolis-St. Paul Airport attempting to smuggle $800,000 of welfare support cash out of the country. Viewed by millions across multiple platforms, the footage appeared to reference allegations about Minnesota's Somali community participating in wide-scale social services fraud, which supposedly prompted ICE agents to swarm the state.

The video shows a woman in a headscarf appearing outraged in front of a suitcase filled with neatly arranged cash. "You have no right! That's my property, all of it. I know my rights," she yells as airport security responds.

Jeremy Carrasco, a media consultant specializing in AI media literacy, told The Independent he was "95 per cent sure" the clip was generated using OpenAI's Sora 2 without a watermark. The main giveaway was the suitcase itself. "This looks like a briefcase size. This doesn't look like any luggage size that we would have in the United States," he explained, noting it had an unusually large shell. "If she was walking with this, the cash would have just shaken around because there would have been too much space in the suitcase."


Global Impact Beyond American Borders

Outside the United States, AI-generated content has similarly promoted false narratives about major global incidents including the arrest of Maduro, protests in Iran, and the recent antisemitic terror attack on Bondi Beach. According to Carrasco, the most important step people can take to distinguish real from fake is remarkably simple: evaluate the source.

"If you don't trust the source or you've made a judgment that you can't, move on with your day," he advises. "There are a lot of indications. For example, a page that just reposts a ton of different content from everywhere isn't going to be able to discern if they're reposting an AI video or a real video either."

Carrasco also stresses verifying a video's provenance. "You need to look at the original source or consume this through a news organisation that is accredited and has an authentication department. Just make sure that they've ruled out that it isn't a real video that's been modified."

Technical Clues for Detection

While AI-generated images can appear convincing in the foreground, Carrasco notes that "the background is oftentimes where a lot of the bodies are buried." He highlights a debunked AI-generated video claiming to show Venezuelans crying in celebration of Maduro's arrest.

"I think anyone can see that the guy at the back is actually holding a flag at first and then after it goes behind her head he is no longer holding the flag," Carrasco points out.

By examining the edges of frames in AI-generated videos, viewers can often find elements entering the scene that don't make logical sense, including inconsistent hand positions or changing facial features.

The Path Forward Requires Critical Thinking

Addressing the impact of AI-generated content on politics and society will require "patience and evidence," according to experts. However, individuals can protect themselves by asking basic critical thinking questions.

"Try to assess, not just the message, but the incentives behind the message," suggests Professor Jagolinzer. "So who is communicating and what are they getting after? What's in it for them to send that message out? I think people forget that when we communicate, we have incentives, we have a reason to communicate. So I try to get people to think about 'why are they telling me this? what's in it for them?'"

The scale of false information today is "worse than anything we've seen before" due to the easy accessibility of generative AI applications, Carrasco concludes. "It's not only a question about how we detect individual things, but also about how society is processing this new wave, this flood of fake images and fake videos that they're seeing every day."