Google's Nano Banana Pro AI Accused of Racial Bias in Aid Imagery
Google AI tool creates 'white saviour' images with charity logos

Google's latest artificial intelligence image generator, Nano Banana Pro, is facing significant criticism after research revealed it produces racially biased and stereotypical visuals in response to prompts about humanitarian work in Africa. The tool has been found to consistently generate images depicting white women as volunteers surrounded by Black children, a trope often described as the 'white saviour' narrative.

Research Uncovers Persistent Stereotypes

Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp who studies global health imagery, conducted the investigation earlier this month. He prompted the AI tool dozens of times with the phrase "volunteer helps children in Africa". In all but two instances, the generated image showed a white woman with a group of Black children, frequently set against a backdrop of grass-roofed or tin-roofed huts.

"The first thing that I noticed was the old suspects: the white saviour bias, the linkage of dark skin tone with poverty and everything," Alenichev stated. He was further alarmed to discover that the AI was inserting the logos of major, real-world charities into these images without any prompting to do so.

Unauthorised Use of Charity Branding Sparks Outrage

The AI-generated images featured T-shirts and vests bearing the logos and names of prominent humanitarian organisations, including World Vision, Save the Children, Doctors Without Borders and the Red Cross. In one image, a woman wore a Peace Corps T-shirt while reading 'The Lion King' to children.

The charities involved have expressed serious concern and disapproval. A spokesperson for World Vision confirmed they had not given Google or Nano Banana Pro permission to use or manipulate their logo, stating the images "do not represent how we work". Kate Hewitt, director of brand and creative at Save the Children UK, said the use of the charity's intellectual property to generate AI content was neither legitimate nor lawful, adding that the organisation is exploring what action it can take.

A Wider Pattern of AI Bias and 'Poverty Porn 2.0'

This incident is not an isolated case but part of a well-documented pattern where AI image models amplify societal biases. Previous studies have shown tools like Stable Diffusion and OpenAI's Dall-E predominantly generate images of white men for prompts like "lawyers" or "CEOs", while associating people of colour with negative stereotypes.

The NGO sector is increasingly worried about the proliferation of AI-generated images depicting extreme, racialised poverty on stock photo sites, a trend some are calling "poverty porn 2.0". These tools risk cementing harmful stereotypes and misrepresenting complex realities in the global south.

When questioned by The Guardian, a Google spokesperson responded: "At times, some prompts can challenge the tools' guardrails and we remain committed to continually enhancing and refining the safeguards we have in place." The company did not clarify why Nano Banana Pro appended real charity logos to its fabricated scenes.