Automated moderation tool from Google rates People of Color and gays as “toxic”

What we do

AlgorithmWatch is a non-profit research and advocacy organisation that evaluates and sheds light on algorithmic decision-making processes.
Latest Articles


Estonia: A city is automating homes to reduce energy consumption

The city of Tartu installed automated systems in old housing blocks. Using nudges, sensors and automated decision-making, it hopes to reduce energy consumption by two-thirds.


Unchecked use of computer vision by police carries high risks of discrimination

By Nicolas Kayser-Bril. At least 11 local police forces in Europe use computer vision to automatically analyze images from surveillance cameras. The risks of discrimination…


In the realm of paper tigers – exploring the failings of AI ethics guidelines

AlgorithmWatch upgraded its AI Ethics Guidelines Global Inventory by revising its categories and adding a search and filter function. Of the more than 160 guidelines we compiled, only a fraction have t…


Finland: How to unionize when your boss is an algorithm and you’re self-employed

A group of Finnish couriers launched the Justice4Couriers campaign in 2018. Although they are technically self-employed, they must obey the whims of their platform’s algorithm. They are fighting back.


This man had his credit score changed from C to A+ after a few emails

By Nicolas Kayser-Bril. A 52-year-old man in Hanover, Germany, discovered that he’d been err…


In Spain, the VioGén algorithm attempts to forecast gender violence

By Michele Catanzaro. This story is part of AlgorithmWatch’s upcoming report Automating Society 2020, to be published later this year. Subscribe to our newsletter to be alerted when the report i…


Google apologizes after its Vision AI produced racist results

A Google service that automatically labels images produced starkly different results depending on the skin tone in a given image. The company fixed the issue, but the problem is likely much broader.


How Dutch activists got an invasive fraud detection algorithm banned

The Dutch government has been using SyRI, a secret algorithm, to detect possible social welfare fraud. Civil rights activists have taken the matter to court and managed to get public organizations to…


“We must save privacy from privacy itself”

Michele Loi is a post-doctoral researcher at the University of Zurich. He argues that proponents of privacy should not put privacy above health, or else risk sliding into irrelevance. Not once have I se…


Germany’s new media treaty demands that platforms explain algorithms and stop discriminating. Can it deliver?

Facebook can’t decide whether it’s a tech company, a media company, a telecoms company, or something else entirely. Ahead of talks with European regulators, CEO Mark Zuckerberg said it’s something in betwe…