story

Undress or fail: Instagram’s algorithm strong-arms users into showing skin

What we do

AlgorithmWatch is a non-profit research and advocacy organisation that evaluates and sheds light on algorithmic decision-making processes.
Latest Articles

story

Can AI mitigate the climate crisis? Not really.

Several institutions claim that AI will contribute to solving the climate crisis, but evidence is scant. On the contrary, AI has a track record of helping emit more greenhouse gases.

position

Our response to the European Commission’s consultation on AI

Read our response to the European Commission's White Paper on Artificial Intelligence, submitted as part of the public consultation on AI.

publication

The only way to hold Facebook, Google and others accountable: More access to platform data

An effective regulatory framework for intermediaries cannot be achieved without meaningful transparency in the form of data access for journalists, academics and civil society actors. This is the resu…

position

Automated decision-making systems and the fight against COVID-19 – our position

by AlgorithmWatch – also available in German, French (Framablog)* and Italian (KRINO)*. As the COVID-19 pandemic rages throughout the world, many are wondering whether and how to use automated decisio…

story

Ten years on, search auto-complete still suggests slander and disinformation

By Nicolas Kayser-Bril • kayser-bril@algorithmwatch.org After a decade and a string of legal actions, an AlgorithmWatch experiment shows that search engines still suggest slanderous, false a…

story

Automated moderation tool from Google rates People of Color and gays as “toxic”

By Nicolas Kayser-Bril • nkb@algorithmwatch.org A systematic review of Google’s Perspective, a tool for automated content moderation, reveals that some adjectives are considered more toxic t…

story

Unchecked use of computer vision by police carries high risks of discrimination

By Nicolas Kayser-Bril • nkb@algorithmwatch.org At least 11 local police forces in Europe use computer vision to automatically analyze images from surveillance cameras. The risks of discrimi…

project

In the realm of paper tigers – exploring the failings of AI ethics guidelines

AlgorithmWatch upgraded its AI Ethics Guidelines Global Inventory by revising its categories and adding a search and filter function. Of the more than 160 guidelines we compiled, only a fraction have t…

story

Finland: How to unionize when your boss is an algorithm and you’re self-employed

A group of Finnish couriers launched the Justice4Couriers campaign in 2018. Although they are technically self-employed, they must obey the whims of their platform’s algorithm. They are fighting back.

story

This man had his credit score changed from C to A+ after a few emails

Read in German: How a clerical error can lower your credit score. By Nicolas Kayser-Bril • nkb@algorithmwatch.org A 52-year-old man in Hanover, Germany, discovered that he’d been err…

story

In Spain, the VioGén algorithm attempts to forecast gender violence

By Michele Catanzaro This story is part of AlgorithmWatch’s upcoming report Automating Society 2020, to be published later this year. Subscribe to our newsletter to be alerted when the report i…

story

Google apologizes after its Vision AI produced racist results

A Google service that automatically labels images produced starkly different results depending on skin tone on a given image. The company fixed the issue, but the problem is likely much broader.
