#discrimination

Publication, 21 June 2022

How to combat algorithmic discrimination? A guidebook by AutoCheck

We encounter automated decision-making systems almost every day, and they may discriminate against us without our even knowing it. A new guidebook helps to recognize such cases and to support those affected.


Position, 24 March 2022

Algorithmic Discrimination – How to adjust German anti-discrimination law

In their coalition treaty, the new German government signaled its intention to evaluate the German anti-discrimination law (Allgemeines Gleichbehandlungsgesetz, AGG). We call on the government to account for the special features of algorithmic discrimination, for instance by introducing collective redress mechanisms to better protect the rights of those affected.

Gareth Harrison | Unsplash

Story, 4 February 2022

Costly birthplace: a discriminatory insurance practice

Two residents of Rome with exactly the same driving history, car, age, profession, and years of holding a driving license may be charged different prices when purchasing car insurance. Why? Because of their place of birth, according to a recent study.


Story, 13 January 2022

Fixing Online Forms Shouldn’t Wait Until Retirement

A new Unding survey is investigating discrimination in online forms, and operators are already getting angry emails. Behind some of them: a recently retired IT consultant with one of the most common surnames in the world and 30 years' experience of not being able to sign up.

Souvik Banerjee | Unsplash

Story, 31 August 2021

LinkedIn automatically rates “out-of-country” candidates as “not fit” in job applications

A feature on LinkedIn automatically rates candidates applying from another EU country as “not a fit”, which may be illegal. I asked six national and European agencies about the issue. None seemed interested in enforcing the law.

Julia Bornkessel, CC BY 4.0

Interview with Jessica Wulf, 25 May 2021

“We’re looking for cases of discrimination through algorithms in Germany.”

The project AutoCheck investigates the risks of discrimination inherent in automated decision-making systems (ADMS). In this interview, project manager Jessica Wulf talks about the search for exemplary cases and how the project will support counseling centers and further education on the topic.

Fraktion DIE LINKE | Flickr

Story, 6 April 2021

Europeans can’t talk about racist AI systems. They lack the words.

In Europe, several automated systems, either planned or operational, actively contribute to entrenching racism. But European civil society literally lacks the words to address the issue.


Project, 9 March 2021

AutoCheck – Mapping risks of discrimination in automated decision-making systems


Story, 4 December 2020

Health algorithms discriminate against Black patients, also in Switzerland

Algorithms used to assess kidney function or predict heart failure use race as a central criterion. Continue reading the story on the AlgorithmWatch Switzerland website.

Joshua Hoehne | Unsplash

Story, 18 October 2020

Automated discrimination: Facebook uses gross stereotypes to optimize ad delivery

An experiment by AlgorithmWatch shows that online platforms optimize ad delivery in discriminatory ways. Advertisers who use them could be breaking the law.

Jon Russell | Flickr

Story, 17 September 2020

Female historians and male nurses do not exist, Google Translate tells its European users

An experiment shows that Google Translate systematically changes the gender of translations when they do not fit with stereotypes. It is all because of English, Google says.

Ivan Aleksic | Unsplash

Story, 12 August 2020

Algorithmic grading is not an answer to the challenges of the pandemic

Graeme Tiffany is a philosopher of education. He argues that replacing exams with algorithmic grading, as was done in Great Britain, exacerbates inequalities and fails to assess students' abilities.

NeONBRAND | Unsplash

Story, 15 June 2020

Undress or fail: Instagram’s algorithm strong-arms users into showing skin

An exclusive investigation reveals that Instagram prioritizes photos of scantily-clad men and women, shaping the behavior of content creators and the worldview of 140 million Europeans in what remains a blind spot of EU regulations.


Story, 19 May 2020

Automated moderation tool from Google rates People of Color and gays as “toxic”

A systematic review of Google’s Perspective, a tool for automated content moderation, reveals that some adjectives are considered more toxic than others.

Luis Villasmil | Unsplash

Story, 28 April 2020

Unchecked use of computer vision by police carries high risks of discrimination

At least 11 local police forces in Europe use computer vision to automatically analyze images from surveillance cameras. The risks of discrimination run high, but authorities ignore them.

Marko Bugarski | Unsplash

Story, 7 April 2020

Google apologizes after its Vision AI produced racist results

A Google service that automatically labels images produced starkly different results depending on the skin tone shown in a given image. The company has fixed the issue, but the problem is likely much broader.
