What is algorithmic discrimination?

Discrimination and Artificial Intelligence (AI): an overview of the topic.


12 May 2025 (update: 28 August 2025)



Pia Sombetzki
Senior Policy Manager

Algorithmic systems can make decisions that discriminate against people, for example when it comes to allocating social benefits or managing job applications. This phenomenon is often described as AI discrimination. When the systems’ decisions are based on data that contains biases, such biases are incorporated into the decisions – unless appropriate safeguards are put in place.
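How bias in historical data carries over into automated decisions can be illustrated with a minimal sketch. The data, group labels, and decision rule below are entirely hypothetical: a naive model "trained" on past hiring decisions in which equally qualified women were hired less often simply reproduces that pattern.

```python
# Minimal sketch with synthetic, hypothetical data: a naive model that learns
# from biased historical hiring decisions reproduces the bias in them.
from collections import defaultdict

# Historical decisions: (gender, qualified, hired). The record is biased:
# qualified women were hired far less often than equally qualified men.
history = [
    ("m", True, True), ("m", True, True), ("m", True, True), ("m", True, False),
    ("f", True, True), ("f", True, False), ("f", True, False), ("f", True, False),
]

# "Training": estimate the historical hire rate per gender among qualified applicants.
counts = defaultdict(lambda: [0, 0])  # gender -> [hired, total]
for gender, qualified, hired in history:
    if qualified:
        counts[gender][0] += int(hired)
        counts[gender][1] += 1

def predict_hire(gender):
    """Recommend hiring if the historical hire rate for this group is >= 50%."""
    hired, total = counts[gender]
    return hired / total >= 0.5

print(predict_hire("m"))  # True  -- qualified men are recommended
print(predict_hire("f"))  # False -- equally qualified women are filtered out
```

The model never "sees" a rule like "prefer men"; it only mirrors the statistics of past decisions. This is the mechanism the paragraph above describes: without safeguards, the bias in the data becomes the bias of the system.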

There are other sources of discrimination as well: any underlying assumption that shaped the development of a model, or the purpose and the way in which an AI system is used. This can be seen in cases of AI racism, for instance when a facial recognition system is primarily designed or trained to identify Black people, or when a system is used to measure employees’ performance without considering the needs of people with disabilities. Technically controlled decisions therefore do not necessarily deliver more “neutral” or “objective” results. The systems themselves are not neutral: people, with their particular assumptions and interests, influence how they are developed and used.

We conduct research on how and where algorithmic discrimination occurs. Have you noticed or experienced a case of potential algorithmic bias? Your report can help make algorithms fairer.

AI systems often discriminate against people who are already disadvantaged. In principle, however, anyone can be affected. Black people can be disadvantaged by systems that determine parole measures for criminals; people from poorer neighborhoods are classified as riskier in credit scoring or social scoring processes; and women can be filtered out by algorithms in automated hiring because of their gender, despite being qualified for the job. The consequences of such discrimination often go unnoticed because it is usually unclear how automated decisions come about. As a result, those affected are often unable to defend themselves.