Allegations of racial bias against Caucasians have surfaced against Google Gemini, an AI chatbot known for, among other things, its image-generation abilities. According to aggrieved users voicing their concerns on X, the tool seems resistant to generating images of white individuals, a bias they observed when testing it with a range of prompts such as Popes, Vikings, and American Revolutionary soldiers.
The immense power of such AI tools, however, comes with substantial responsibility and a set of ethical, social, and political challenges.
One of the primary concerns centers around the individuals and organizations that program and control these AI systems. Since AI algorithms are designed and trained by humans, they inherently carry the biases, values, and objectives of their creators. As such, those who develop and deploy AI technologies could gain unprecedented influence and power over society. This power dynamic is especially pertinent when considering Big Tech companies, which are already influential due to their size, reach, and control over vast amounts of data.
Observations so far point to a peculiar scarcity of white figures in the images created by the Gemini application: despite its wide-ranging capabilities, the bot appears to largely exclude white people from its output.
A series of tests has reinforced these observations, with users devising indirect prompts intended to coax the AI into depicting white individuals.
Even these roundabout requests, such as images of a medieval knight, a country music fan, a Revolutionary War soldier, and Vikings, still produced results that almost entirely excluded white people.
These tests have led many to question whether a “diversity” consideration in Google’s algorithm might be responsible for the apparent exclusion of white individuals from generated images.