Online Violence and Deepfakes Push Women Out of Work, Survey Finds

A recent global survey has found that online violence against women, including cyberflashing, deepfakes, and unwanted messages, is pushing many out of public life and their careers. More than a quarter of women have received unwanted intimate images or "sexts," known as cyberflashing, and one in four have experienced depression or anxiety as a result.

Survey Details

The report, titled "Tipping point: Online violence impacts, manifestations and redress in the AI age," was published by UN Women and conducted by researchers at City St George's, University of London. Between August and November 2025, they surveyed 641 women working in human rights, activism, and journalism across 119 countries. The findings revealed that 27% of respondents received unwanted intimate images, sexual innuendos, or nonconsensual sexting. Additionally, 12% had their personal images, including intimate ones, shared without consent, and 6% were subjected to deepfakes or manipulated images and videos.

Mental Health Consequences

These deliberate and often coordinated attacks had alarming effects on mental health. Approximately 24% of women surveyed experienced anxiety and depression following online violence, while 13% reported being diagnosed with Post-Traumatic Stress Disorder (PTSD). Almost one in five (19%) said they were self-censoring at work as a result, and 41% self-censored on social media to avoid abuse.

One journalist who responded to the survey said she was "unable to cope" and resigned from her job in December 2023 because of online violence. "I am now sitting at home, focussed solely on restoring my mental wellness. This necessary retreat has caused severe financial problems," she said, adding that she had been "forced into silence out of work."

Underreporting and Legal Action

The vast majority of cases go unreported. Only 25% of respondents had reported incidents to the police, and 15% had taken legal action. Just 10% said charges were successfully brought against their abusers. Lea Hellmueller, Associate Professor of Journalism at City St George's, commented, "The chilling effect of online violence is pushing women out of public life. Law enforcement is outsourcing the responsibility for protection to the survivors by telling women to remove themselves from social media, to avoid speaking publicly about controversial issues, to move into less visible roles at work, or to take leave from their respective careers. This shows that avoidance techniques – self-censorship or quitting – are still significantly more likely to be used by women rather than resistance techniques such as reporting online attacks to the police."

Government Response

A government spokesperson said: "Vile, misogynistic online abuse has no place in the UK, which is why we have made the creation of deepfake intimate images without consent a crime. We are also banning AI tools which generate deepfake sexual images of people without consent, with developers and suppliers facing up to three years in prison. We are forcing platforms to remove non-consensual intimate images within 48 hours, and holding tech bosses personally liable if they fail to comply with Ofcom’s decisions. We have also made cyberflashing a priority offence under the Online Safety Act, meaning platforms have to proactively tackle these images before they reach women. We will keep acting until the online world protects women from abuse and exploitation."
