Gender Gap in AI Use: Women More Wary of Ethical Risks, Study Finds
Women use AI less than men due to ethical concerns

In an age where the ethics of our food, clothes, and media are constantly scrutinised, a new frontier of moral consumerism is emerging: artificial intelligence. A recent study has uncovered a significant gender divide in the adoption of generative AI, suggesting that women are far more cautious about its potential harms than men.

The Gender Gap in AI Adoption

Research published in December 2025 revealed a substantial disparity in how men and women use tools like ChatGPT and Gemini. The study found that women are using generative AI up to 18% less than men. The authors suggested this gap may stem from women exhibiting "more social compassion, traditional moral concerns, and pursuit of equity."

"Greater concern for the social good may partly explain women's lower adoption of GenAI," the researchers concluded. Their ethical worries are multifaceted, ranging from fears that using chatbots for work constitutes cheating, to profound anxieties about data privacy, entrenched societal biases, and the potential for AI to facilitate unethical or violent behaviour.

From Unethical Examples to Foundational Flaws

Sometimes, identifying unethical AI is straightforward. In recent weeks, Grok – the chatbot created by Elon Musk's xAI and integrated into X (formerly Twitter) – was used to generate sexualised and violent imagery, particularly of women. This behaviour highlights a core issue: AI systems, devoid of inherent morality, will strive to fulfil any request unless explicitly constrained by their creators.

Campaigner Laura Bates, author of The New Age of Sexism: How the AI Revolution is Reinventing Misogyny, has long warned of these dangers. She argues that unchecked AI can amplify misogyny, harassment, and inequality. Giving evidence to the Women and Equalities Committee in the House of Commons last year, Bates stressed that ethical AI must be designed with these risks in mind, cautioning that we are repeating the mistakes made with social media two decades ago, but at a greater scale.

The ethical quandaries begin at the very foundation of large language models. These systems are trained on vast datasets scraped from the internet, often with little regard for copyright or creator consent. This has led to high-profile legal battles, such as the case where a US judge ruled that Anthropic's use of pirated books fell under "fair use," while simultaneously reprimanding the company for copying over 7 million copyrighted texts to train its model, Claude.

The Search for an Ethical Framework

In response to these challenges, AI companies are attempting to codify ethical principles. Anthropic states it used "constitutional AI" based on the Universal Declaration of Human Rights to build Claude, instructing it to choose responses that encourage "freedom, equality, and a sense of brotherhood." However, the company admitted early versions became "judgemental or annoying," requiring additional rules to avoid sounding condescending.

Similarly, DeepMind employs a "robot constitution" for physical robots, blending grand Asimovian laws with practical safety rules. Commitment to transparency varies across the industry: French firm Mistral emphasises open-source development, while at a recent AI summit the UK and US governments declined to sign a pledge for ethical and safe AI endorsed by 60 other nations.

Faced with a growing consumer backlash, as seen with Musk's Grok this week, AI companies may find that users begin choosing between them on old-fashioned taste and ethical preference. Selecting an AI could soon involve the same careful consideration as buying a sustainably sourced sofa: a conscious decision reflecting our values in a complex, technologically charged world.