Image classification algorithms at Apple, Google still push racist tropes

Automated systems from Apple and Google label characters with dark skin as “Animals”.

Nicolas Kayser-Bril
Reporter

Roy A., a Berlin-based lawyer who also campaigns against discrimination, recently told AlgorithmWatch that some pictures depicting Black people appeared under the label “Animal” (“Tier” in German) on his iPhone 8.

Apple added the automated labeling feature to iOS in 2016. According to the company, the process takes place entirely on the user’s device. Perhaps due to the phone’s limited resources, it labels images erratically. An AlgorithmWatch colleague, who is white, reported that her children had been labeled “Animal” in the past.

However, a slight modification to the image Mr A. reported was enough for Apple’s software to stop applying the “Animal” label.

While both images were labeled “People”, only the characters with darker skin tones were labeled “Animals” by Apple.

The behavior is not unique to Apple’s software. We processed both images in Google Vision, an online service for image labeling. The image depicting characters with darker skin tones was labeled “Mammal”; the other was not. Other labels, including “Human Body”, were similar for both pictures.

While the label is biologically correct for humans, a cursory search on Google Images shows that, in Google’s taxonomy, “Mammal” applies only to animals.

We tested other cartoon characters in Google Vision. In most cases, a dark overlay on the character’s skin was enough for the system to label it an animal instead of a human.
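The test can be approximated with publicly available tools. The sketch below, which assumes the google-cloud-vision and Pillow Python packages and uses illustrative file names, darkens an image with a black overlay and compares the labels Google Vision returns before and after; in our tests the overlay was applied to the character’s skin rather than the whole image, which this simplified version does not reproduce.

```python
# Minimal sketch (assumption): compare Google Vision labels for an image
# before and after a dark overlay, using google-cloud-vision and Pillow.
# "cartoon.png" and the overlay opacity are illustrative placeholders.
import io

from PIL import Image
from google.cloud import vision


def darken(path: str, opacity: float = 0.5) -> bytes:
    """Blend a black layer over the whole image (a coarse stand-in for
    the skin-only overlay used in the article's test)."""
    img = Image.open(path).convert("RGB")
    black = Image.new("RGB", img.size, (0, 0, 0))
    blended = Image.blend(img, black, opacity)
    buf = io.BytesIO()
    blended.save(buf, format="PNG")
    return buf.getvalue()


def labels(content: bytes) -> list:
    """Return the label descriptions Google Vision assigns to the image bytes."""
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=content))
    return [annotation.description for annotation in response.label_annotations]


if __name__ == "__main__":
    with open("cartoon.png", "rb") as f:  # hypothetical test image
        original = f.read()
    print("original:", labels(original))
    print("darkened:", labels(darken("cartoon.png")))
```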

Tracy Frey, a director for Responsible AI at Google, told AlgorithmWatch that "this result [was] unacceptable" and that "multiple teams ... [were] investigating to understand what went wrong. [Machine Learning] models learn from existing data collected from the real world, and so an accurate model may learn or even amplify problematic pre-existing biases in the data based on race, gender, religion or other characteristics. Avoiding the creation or propagation of unfair bias is one of our core AI Principles and the work of fighting unfair bias is never complete; we are always striving to be better," Ms Frey added.

Apple did not answer requests for comment sent by email on 12 May.

No isolated case

Google’s image labeling services have a history of producing discriminatory and racist outputs. In 2015, Google Photos labeled individuals with dark skin tones “gorillas”. The company apologized but, according to a report by Wired, did not solve the problem. Instead, it simply stopped returning the “gorilla” label, even for pictures of that specific mammal.

In 2020, AlgorithmWatch revealed that a hand with dark skin holding an infrared thermometer was labeled “gun”, while the same hand with a light overlay was labeled “tool”. The company apologized and now only displays the “gun” label when its software recognizes one with a high level of certainty.

It seems, from these examples, that Google has yet to address the root cause of the racist biases in its image labeling services.

Lacking diversity

There are many reasons why Apple’s and Google’s image labeling systems display racist results. It could be that the training data sets are unbalanced and have few Black people in them. It could be that the people who tagged images in these training data sets purposefully used racist tropes in their tagging. It could be that the models developed by the engineers do not optimize for fairness. Absent an independent audit of these systems, the precise causes cannot be identified.

Both Apple and Google remain overwhelmingly white. The proportion of Black workers in tech and management roles at both companies increased over the past seven years, but remains in the low single digits. At Apple, the proportion of Black workers in tech remained flat.


Old prejudice

Denying racialized people their humanness is a core component of modern racism. Several racist scholars of the 18th and 19th centuries, such as Christoph Meiners or Louis Agassiz, claimed that non-whites had a separate origin, in effect treating racialized people as a different species. In general, proponents of racism agreed that racialized people were closer to animals than whites.

Viewing racialized others as non- or less human had concrete embodiments. Racialized people were regularly displayed in ethnographic exhibitions, behind fences or alongside animals. The last such event in Europe was organized in 1994, in France.

Labeling Black characters “Mammals” or “Animals” is not a bug that can be “fixed”. It is the continuation of over two centuries of institutional racism.

A man from the Malabar Coast of India in Paris in 1902.

Edited 14 May 21:10 to add Google's statement.
