#discrimination (19 results)

Page 2 of 2

Project, 9 March 2021

AutoCheck – Mapping risks of discrimination in automated decision-making systems


Story, 4 December 2020

Health algorithms discriminate against Black patients, also in Switzerland

Algorithms used to assess kidney function or predict heart failure use race as a central criterion. Continue reading the story on the AlgorithmWatch Switzerland website.

Photo: Joshua Hoehne | Unsplash

Story, 18 October 2020

Automated discrimination: Facebook uses gross stereotypes to optimize ad delivery

An experiment by AlgorithmWatch shows that online platforms optimize ad delivery in discriminatory ways. Advertisers who use them could be breaking the law.

Photo: Jon Russell | flickr

Story, 17 September 2020

Female historians and male nurses do not exist, Google Translate tells its European users

An experiment shows that Google Translate systematically changes the gender of translations when they do not fit with stereotypes. It is all because of English, Google says.

Photo: Ivan Aleksic | Unsplash

Story, 12 August 2020

Algorithmic grading is not an answer to the challenges of the pandemic

Graeme Tiffany is a philosopher of education. He argues that replacing exams with algorithmic grading, as was done in Great Britain, exacerbates inequalities and fails to assess students' abilities.

Photo: NeONBRAND | Unsplash

Story, 15 June 2020

Undress or fail: Instagram’s algorithm strong-arms users into showing skin

An exclusive investigation reveals that Instagram prioritizes photos of scantily-clad men and women, shaping the behavior of content creators and the worldview of 140 million Europeans in what remains a blind spot of EU regulations.


Story, 19 May 2020

Automated moderation tool from Google rates People of Color and gays as “toxic”

A systematic review of Google’s Perspective, a tool for automated content moderation, reveals that some adjectives are considered more toxic than others.

Photo: Luis Villasmil | Unsplash

Story, 28 April 2020

Unchecked use of computer vision by police carries high risks of discrimination

At least 11 local police forces in Europe use computer vision to automatically analyze images from surveillance cameras. The risks of discrimination run high, but authorities ignore them.

Photo: Marko Bugarski | Unsplash

Story, 7 April 2020

Google apologizes after its Vision AI produced racist results

A Google service that automatically labels images produced starkly different results depending on the skin tone of the people in a given image. The company fixed the issue, but the problem is likely much broader.
