“We’re looking for cases of discrimination through algorithms in Germany.”

The project AutoCheck investigates the risks of discrimination inherent in automated decision-making systems (ADMS). In this interview, project manager Jessica Wulf talks about the search for exemplary cases and how the project will support counselling centers and further training on the topic.

Photo: Julia Bornkessel, CC BY 4.0

Automated systems are often deployed in hopes that they will make decisions more objectively than humans. But why is it that such systems pose a particular risk of treating people in a discriminatory or unequal way?

Unfortunately, it has been shown that automated systems, even when they are developed and deployed with good intentions, produce discriminatory results, directly or indirectly. They are therefore neither automatically neutral nor objective. During the development and implementation of automated systems, humans repeatedly make decisions and thereby – often unconsciously – build their prejudices and biases into the system. Ultimately, this can lead to discriminatory decisions.

I have been working on the topic of unconscious bias for many years, most recently as Gender and Diversity Policy Officer at the Vienna University of Economics and Business. Unconscious biases are attitudes, values, assumptions, and prejudices about people that we are not aware of but that significantly influence our perceptions, decisions, and behavior. They are expressions of systematic inequality in our society and can be found in all areas of life. In automated decision-making, these biases play an essential role: it is merely a new field of application for them, but one with potentially far-reaching consequences.

When biases are inscribed into automated systems and those systems make unfair or even discriminatory decisions or recommendations, then the “decision-making” follows this pattern in every case. If, for example, recruiting software discriminates against women, then women are discriminated against in every decision based on that software. It is through this mechanism that potentially many people are affected and the overall societal impact can become large.
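A minimal Python sketch can make this mechanism concrete. Everything in it is invented for illustration – the records and the screening rule are not taken from any real system – but it shows how a rule derived from historically skewed data, once automated, rejects one group in every single decision.

```python
# Purely illustrative: all records and the screening rule are invented.
historical_records = [
    {"gender": "m", "hired": True},  {"gender": "m", "hired": True},
    {"gender": "m", "hired": False}, {"gender": "f", "hired": True},
    {"gender": "f", "hired": False}, {"gender": "f", "hired": False},
]

def hire_rate(records, gender):
    """Share of past applicants of a given gender who were hired."""
    group = [r for r in records if r["gender"] == gender]
    return sum(r["hired"] for r in group) / len(group)

def automated_screening(applicant_gender):
    """Naive rule favoring the group with the higher historical hire rate.
    Once deployed, this historical skew is applied to every applicant."""
    return hire_rate(historical_records, applicant_gender) >= 0.5

for gender in ["f", "m", "f", "m"]:
    print(gender, "passes screening:", automated_screening(gender))
```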

What concrete examples are there of discrimination through automated systems?

A very well-known example from the US concerns facial recognition software. The researchers Joy Buolamwini and Timnit Gebru discovered that widely used facial recognition software developed by big companies such as IBM and Microsoft was far less accurate for black women than for white men. The higher error rate was alarming because the software was deployed by some US police forces.
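As a rough illustration of the kind of disaggregated evaluation behind that finding, the sketch below uses invented numbers rather than the study's actual data. It only demonstrates how an error rate that looks acceptable overall can diverge sharply between demographic groups.

```python
# Invented evaluation results: (group, prediction was correct?)
results = (
    [("lighter-skinned men", True)] * 98 + [("lighter-skinned men", False)] * 2
    + [("darker-skinned women", True)] * 65 + [("darker-skinned women", False)] * 35
)

def error_rate(records, group):
    outcomes = [correct for g, correct in records if g == group]
    return 1 - sum(outcomes) / len(outcomes)

overall = 1 - sum(correct for _, correct in results) / len(results)
print(f"overall error rate: {overall:.0%}")
for group in ("lighter-skinned men", "darker-skinned women"):
    print(f"{group}: error rate {error_rate(results, group):.0%}")
```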

Another well-known example involves recruiting software reportedly developed by Amazon. The goal was to make recruiting processes more efficient. While the software was still in its testing phase, it turned out that its recommendations discriminated against women by sorting out their resumes. According to Amazon, the project was ended before the software was deployed because attempts to remedy the discrimination against women had failed.

What about examples from Germany? 

Up until now, only a few cases have become known in Germany, for instance a passport photo machine that did not work reliably for black people (in German). This is despite the fact that automated systems are increasingly deployed in Germany, as shown by our Automating Society Report 2020. The project AutoCheck therefore also aims to find cases in Germany. This is especially important because the legal framework differs greatly from country to country, for example between the US and Germany. Case studies are important both to reflect the current situation and to better illustrate the topic.

Why is it so difficult to find such cases?

Let me illustrate this with an example. Last year, AlgorithmWatch conducted an experiment in which two fictitious job advertisements were published on Facebook: one for a truck driver and one for a child care worker. The truck driver ad was 10 times more likely to be displayed to men than to women, and the child care worker ad was 20 times more likely to be displayed to women than to men – a very unequal distribution.
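Put into numbers – the impression counts below are invented, only the rough 10x and 20x ratios mirror the experiment – the skew can be summarized like this:

```python
# Hypothetical impression counts; only the rough 10x / 20x ratios mirror the experiment.
impressions = {
    "truck driver ad":      {"men": 4000, "women": 400},
    "child care worker ad": {"men": 200,  "women": 4000},
}

for ad, counts in impressions.items():
    ratio = counts["men"] / counts["women"]
    audience = "men" if ratio > 1 else "women"
    factor = ratio if ratio > 1 else 1 / ratio
    print(f"{ad}: shown about {factor:.0f} times more often to {audience}")
```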

If I were searching for a job as a truck driver, then, as this experiment showed, I would not see all of the job advertisements for such positions. And since I do not see those advertisements, I cannot know that they exist in the first place and that they are simply not being shown to me. I am probably not even aware that an algorithm is at work here and that different people are shown different job postings, even though I never set it up that way myself.

What I want to demonstrate with this example is the following: in most cases, the people affected have no clue that they were treated differently from others, or which criteria that differential treatment was based on. The example shows the complexity of the topic and the difficulty of finding case studies. It is precisely this complexity that motivates me to work on AutoCheck – and the prospect of contributing to the discussion about fair ADM systems.

What do you plan to do with the German case studies?

We plan to develop tools and guidelines that explain and illustrate the risks of discrimination posed by automated systems. These could, for example, take the form of checklists or decision trees. The guidelines, together with the case studies, will then serve as a basis for training courses at anti-discrimination counselling centers. Such counselling centers are crucial contact points for people affected by discrimination. The training will be designed to help employees better understand automated systems and recognize the associated risks. In this way, those affected can be better supported.
