A major new investigation has uncovered a disturbing trend on one of the world's most popular social media platforms. Research indicates that AI-generated videos pushing strongly anti-immigrant narratives are being viewed billions of times on TikTok. This revelation raises profound concerns about the role of AI in spreading harmful content and the effectiveness of platform moderation.
The Scale of the Problem
The findings come from a detailed report by the Institute for Strategic Dialogue (ISD), a respected think-tank focused on extremism and disinformation. Their analysts spent months examining content on TikTok, specifically looking for material that was likely generated by artificial intelligence tools. What they discovered was alarming in both volume and reach.
The report identified a significant network of accounts posting AI-generated videos that portrayed immigrants and immigration in a relentlessly negative light. This content often used synthetic voices, AI-created images, and manipulated footage to push a hostile agenda. Crucially, the ISD's analysis suggests this is not a marginal issue. Collectively, these videos have garnered a staggering 4.5 billion views, indicating they are being served to a vast, mainstream audience by the platform's powerful recommendation algorithm.
How AI Fuels the Fire
The use of AI tools fundamentally changes the economics of creating and spreading divisive content. Previously, producing polished, engaging video required significant skill, time, and resources. Now, readily available generative AI applications allow almost anyone to create convincing and emotionally charged clips with minimal effort.
This content often takes the form of short, impactful videos that use AI-generated narrators or synthetic voices to deliver scripted monologues. They frequently pair this with manipulated or AI-created imagery designed to provoke fear or anger. The ease of production means that once a successful narrative template is found, it can be replicated and varied endlessly, flooding the platform with near-identical messages.
Perhaps most worryingly, the ISD report highlights that this AI-generated material is not confined to obscure corners of TikTok. Instead, the platform's own "For You" feed algorithm is actively promoting these videos to users who may not have been seeking out such content, effectively radicalising viewers through passive consumption.
Platform Response and Regulatory Pressure
In response to the report's findings, a spokesperson for TikTok stated that the platform has clear policies against hate speech and is investing in technology to detect and remove synthetic media that violates its rules. They emphasised that "AI-generated content that includes realistic images must be clearly labelled" under their policies.
However, the sheer scale of the content identified by the ISD suggests that TikTok's enforcement mechanisms are failing to keep pace. The report argues that the platform's systems struggle to consistently identify AI-generated material, especially when it avoids the photorealistic human faces that detectors find easiest to flag.
This situation is set to increase pressure on regulators, particularly the UK's Ofcom, which is in the process of implementing the new Online Safety Act. The Act places a legal "safety duty" on tech companies to protect users from harmful content. This latest research provides concrete evidence of a systemic failure, potentially paving the way for stricter enforcement and hefty fines if platforms do not demonstrate more effective control over AI-fuelled disinformation.
A Broader Crisis of Trust
The proliferation of AI-generated anti-immigrant content on TikTok is symptomatic of a wider crisis. As generative AI tools become more sophisticated and accessible, the potential for them to be weaponised to sow social division and undermine democratic discourse grows exponentially. This is not just a moderation challenge for one platform; it is a fundamental test for society's ability to navigate an information ecosystem where distinguishing truth from AI-generated fiction is increasingly difficult.
The ISD's report serves as a stark warning. Without urgent and coordinated action from platforms, regulators, and policymakers, the algorithmic amplification of AI-created hate and disinformation threatens to poison public debate and exacerbate real-world social tensions. The billions of views recorded are not just a metric; they represent the scale of the audience being exposed to digitally manufactured prejudice.