TikTok is confronting the threat of a UK employment tribunal as two of its content moderators prepare legal action, alleging the social media giant engaged in 'union-busting' by slashing hundreds of safety roles. The move has ignited fears for online safety and exposed a pressured workplace culture where employees claim they are 'pitted against each other'.
The Allegations of Union-Busting and Unfair Dismissal
In a significant escalation, two content moderators from TikTok's London-based trust and safety team have sent a formal legal letter to the company. They allege automatic unfair dismissal and unlawful detriment, centring their case on events in August 2025.
The workers claim a ballot was scheduled for 29 August 2025 for the team to vote on unionising with the United Tech & Allied Workers (UTAW). Crucially, just seven days before this vote, on 22 August 2025, TikTok announced it would be cutting hundreds of jobs from that very team as part of a 'global reorganisation'.
TikTok stated the cuts were part of a shift towards using more automated systems and artificial intelligence (AI) for content moderation. However, unions and the affected employees argue the timing was deliberate, labelling it 'bare-faced union busting' designed to thwart the organising effort. The moderators are asking an employment tribunal to order their reinstatement and compensation for lost pay if they are made redundant.
TikTok has firmly rejected the claims as 'baseless', reiterating that the changes were part of a wider global operational shift.
'A Culture of Pressure': Inside TikTok's Moderation Grind
Whistleblowers speaking to The Independent have depicted a 'stressful' and 'high pressure' working environment that motivated the push to unionise. Moderators are governed by a strict 'utilisation rate' system: software that tracks their activity and penalises idle time against fixed limits, regardless of how complex a video is to assess.
Performance is graded, with bonuses tied to statistics. One moderator revealed she was once expected to review 1,200 videos per day. 'We have to focus every single day, every single hour, we always have to be on target,' she said. The relentless screen time and pressure to meet 'arbitrary' targets have reportedly led to health issues, with one worker seeking help from a neurologist for headaches.
'You are pitted against each other,' one source said, adding that some colleagues skipped breaks to achieve higher grades. Lawyers representing the workers say the intense demands leave staff feeling their brains are being 'burned'.
The High-Stakes Consequences for Online Safety
The planned job cuts have raised serious alarms about the future of safety on one of the world's most popular platforms. Dame Chi Onwurah, chair of the Commons Science, Innovation and Technology Committee, warned in November 2025 that the reductions posed a 'real risk to the lives of TikTok users'.
Current employees are deeply sceptical that AI is ready to take over. One moderator stated he sees TikTok's AI make mistakes 'all the time', citing examples where finger-gun gestures are misidentified as real weapons or stains on a wall are flagged as blood. In comment sections, the AI often fails to catch users who employ coded emojis to evade detection.
Whistleblowers expressed particular concern for child safety, with one moderator estimating 90 per cent of the content she reviews is from children, many apparently under the platform's minimum age of 13. A significant portion involves suicide-related references, requiring nuanced, empathetic judgement. 'When we moderate, I think you have to use human emotion... I don't think that is something AI can do,' she said.
While TikTok asserts that 91 per cent of violating content is removed by automation, moderators fear remaining human roles will be outsourced to lower-paid countries, potentially losing vital cultural context.
Lawyers have given TikTok one month to respond to the legal letter, with a reply anticipated by late January 2026. The company's next move—whether to negotiate or fight in court—will determine the fate of the moderators and set a critical precedent for tech worker rights and online safety in the UK.