Google's Nano Banana Pro AI Accused of 'White Saviour' Bias and Logo Misuse
Google AI tool creates racialised 'white saviour' images

Google's latest artificial intelligence image generator, Nano Banana Pro, is facing criticism after research revealed that it produces racially stereotyped visuals and appends the logos of major humanitarian organisations without permission.

Research Uncovers Deep-Seated Bias

An investigation found that the AI tool repeatedly generated images of a white woman surrounded by Black children when prompted with phrases such as "volunteer helps children in Africa". Most of the dozens of images created featured backgrounds of stereotypical grass-roofed or tin-roofed huts.

Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp who studies global health imagery, discovered the issue while experimenting with Nano Banana Pro earlier this month. "The first thing that I noticed was the old suspects: the white saviour bias, the linkage of dark skin tone with poverty," Alenichev stated. He was particularly struck by the unauthorised appearance of charity logos, which he did not request in his prompts.

Unauthorised Use of Charity Branding Sparks Outrage

In several generated images, the AI clothed the white female volunteers in T-shirts bearing the names and logos of real international charities. These included World Vision, Save the Children, Doctors Without Borders, and the Red Cross. One image even showed a woman in a Peace Corps shirt reading *The Lion King* to a group of children.

The charities involved have expressed serious concern. A spokesperson for World Vision confirmed that Google had not contacted the organisation about Nano Banana Pro, and that it had given no permission for its logo to be used or its work to be portrayed in this way. "These AI-generated images do not represent how we work," said Kate Hewitt, director of brand and creative at Save the Children UK. The charity has raised concerns about the unlawful use of its intellectual property and is considering what further action to take.

A Wider Problem of AI Amplifying Stereotypes

This incident is not isolated. AI image generators such as Stable Diffusion and OpenAI's DALL-E have been shown to replicate and exaggerate societal biases, often depicting professionals such as lawyers or CEOs as white men. The NGO community is increasingly alarmed by a wave of AI-generated poverty imagery appearing on stock photo sites, which critics have dubbed "poverty porn 2.0".

When questioned by the Guardian, a Google spokesperson responded: "At times, some prompts can challenge the tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place." The company did not clarify why the tool appended real charity logos to its fabricated scenes.

The controversy highlights the ongoing challenge of preventing AI models from reproducing harmful stereotypes learned from their training data, as well as the ethical and legal ramifications of AI systems generating content that incorporates protected intellectual property.