TikTok is testing a new feature that gives users greater control over their digital experience by letting them limit how much artificial intelligence (AI) generated content appears in their feeds. The move is a direct response to a significant surge in AI-made videos on the platform.
The Rise of AI Content and User Controls
The social media giant has revealed that it currently hosts 1.3 billion videos labelled as AI-generated, an explosion driven largely by the proliferation of advanced video creation tools such as OpenAI’s Sora and Google’s Veo 3. Even so, AI content still represents only a small fraction of the more than 100 million pieces of content uploaded to TikTok every day.
Jade Nester, TikTok’s European director of public policy for safety and privacy, stated: “We know from our community that many people enjoy content made with AI tools, from digital art to science explainers, and we want to give people the power to see more or less of that, based on their own preferences.”
The new feature, which will be tested for a few weeks before a global rollout, is straightforward to use: users open the app, navigate to settings, tap “manage topics” and select the option for “AI-generated content”. The option joins a list of other filterable topics such as fashion, beauty, and current affairs.
Addressing the 'AI Slop' Problem and Safety Policies
This move by TikTok comes amid growing concerns across social media about the quality and volume of AI-made material. The term “AI slop” has been coined to describe low-quality, mass-produced content that is often meaningless and contains unrealistic imagery. According to a report from the Guardian, nearly one in ten of the fastest-growing YouTube channels worldwide exclusively feature AI-generated content.
TikTok is reinforcing its existing policies to maintain safety and authenticity. The platform requires creators to label AI-generated videos that appear realistic; videos left unlabelled are removed. Its guidelines also explicitly prohibit dangerous deepfakes, such as those depicting well-known individuals or catastrophic world events.
Furthermore, an “AI-made” watermark is automatically applied to content created using TikTok's own AI tools. The company is also investing in education, partnering with organisations and experts to promote responsible AI use. This includes funding of £1.5 million for groups like Girls Who Code.
Controversy Over AI Moderation and Job Cuts
While embracing AI, TikTok faces criticism over plans to make 439 content moderators on its London-based trust and safety team redundant. Trade unions and online safety experts worry that human moderators are being replaced by automated AI systems.
Brie Pegum, TikTok’s global head of program management for trust and safety, defended the decision. She explained that human moderation remains crucial, but AI helps protect employees by automatically filtering out the most harmful and distressing content before it reaches human reviewers. The platform reported a 76% decrease in graphic material viewed by human moderators over the past year thanks to these automated systems.
This new feature represents a significant step in giving users autonomy over their feeds, balancing the innovative potential of AI with the need for user choice and safety.