AutoCheck – Mapping risks of discrimination in automated decision-making systems

Automated decision-making (ADM) processes are becoming a fact of life in more and more areas: They assess our creditworthiness, set individualized prices on online retail platforms, or suggest which movies to watch. However, their use does not necessarily lead to less-biased decisions. ADM systems can perpetuate or even reinforce inequality and discrimination. For example, a study by AlgorithmWatch showed that job ads published on Facebook were displayed to different audiences based on gross stereotypes: A job ad for a truck driver was 10 times more likely to be shown to men than to women, and a job ad for an educator was 20 times more likely to be shown to women than to men. If ADM systems make discriminatory decisions or suggest them, many people are potentially affected, because all decisions follow the same pattern. These risks are systematic, and the overall societal impact can be large.

In this context, anti-discrimination counseling centers play an important role, acting as central points of contact to educate, counsel, and support those affected. However, discrimination caused by ADM systems is difficult to identify, and it is often unclear how affected persons or counseling centers can deal with the issue. Therefore, in our project AutoCheck – a Guide about Automated Decision-Making Systems for Equality Bodies, we are developing concrete, comprehensive instructions and tools for anti-discrimination counseling centers and designing training courses. With these, we will build the capacity of employees at German anti-discrimination counseling centers so that they can better recognize and assess risks, and thus better support individuals affected by discrimination resulting from the use of ADM systems.

Project Manager

Jessica Wulf
wulf@algorithmwatch.org

Funded by