Social media platforms including TikTok, Facebook, and X are hosting hundreds of AI-generated deepfake videos featuring manipulated footage of real doctors and health experts. These videos are being used to spread health misinformation and promote unproven dietary supplements, a major investigation has revealed.
Sinister New Tactic Uncovered by Fact-Checkers
The fact-checking organisation Full Fact uncovered the widespread campaign, publishing its findings on Friday. The investigation identified a vast network of videos where the likeness and voice of respected medical professionals have been digitally altered using artificial intelligence.
These deepfakes take real footage of experts sourced from the internet and digitally alter both the images and the audio. The result is a convincing but entirely fabricated endorsement, in which the cloned individuals appear to encourage women experiencing menopause to purchase specific products. These include probiotics and a substance called Himalayan shilajit from a US-based firm named Wellness Nest.
"This is certainly a sinister and worrying new tactic," said Leo Benedictus, the fact-checker who led the investigation. He explained that the creators deploy AI so that "someone well-respected or with a big audience appears to be endorsing these supplements to treat a range of ailments."
Real Doctors Targeted and Their Words Distorted
Among those impersonated is Professor David Taylor-Robinson, a public health and inequalities expert at the University of Liverpool. In August, he was shocked to discover 14 doctored videos on TikTok falsely showing him recommending unproven Wellness Nest products.
Although he specialises in children's health, one deepfake depicted a cloned version of him discussing a fabricated menopause symptom called "thermometer leg." The fake Taylor-Robinson directed viewers to the Wellness Nest website to buy a "natural probiotic" with supposed benefits for menopausal symptoms.
"It was really confusing to begin with – all quite surreal," said the real professor, who was alerted by a colleague. "I didn't feel desperately violated, but I did become more and more irritated at the idea of people selling products off the back of my work and the health misinformation involved."
The original footage was stolen from a 2017 Public Health England conference talk on vaccination and a May parliamentary hearing on child poverty. In one particularly egregious deepfake, he was even depicted swearing and making misogynistic comments.
TikTok removed the videos six weeks after Taylor-Robinson complained, a process he described as a "faff." Initially, the platform claimed that only some of the videos violated its guidelines, a response he called "absurd."
Calls for Action and Platform Responsibility
The investigation also found deepfakes of Duncan Selbie, the former chief executive of Public Health England, and other high-profile figures like Professor Tim Spector and the late Dr Michael Mosley. All were falsely linked to Wellness Nest or its UK-linked outlet.
These revelations have prompted urgent calls for social media companies to take greater responsibility. Helen Morgan, the Liberal Democrat health spokesperson, stated: "From fake doctors to bots that encourage suicide, AI is being used to prey on innocent people and exploit the widening cracks in our health system."
She called for AI deepfakes posing as medical professionals to be "stamped out" and argued that if individuals fraudulently pretended to be doctors they would face prosecution, questioning why the "digital equivalent" is being tolerated.
In response to the findings, a TikTok spokesperson said: "We have removed this content for breaking our rules against harmful misinformation and behaviours that seek to mislead our community, such as impersonation." The spokesperson acknowledged that "harmfully misleading AI-generated content is an industry-wide challenge" and stated the platform continues to invest in new detection methods.
Wellness Nest told Full Fact that the deepfake videos were "100% unaffiliated" with its business and that it had "never used AI-generated content," but added it "cannot control or monitor affiliates around the world."
The case highlights the escalating threat of AI-generated disinformation in the health sector and the pressing need for robust regulatory and platform-led responses to protect the public from digitally manipulated fraud.