campaign

Hey researchers! Have you been ‘left on read’ by platforms? Share your stories!

What we do

AlgorithmWatch is a non-profit research and advocacy organisation that evaluates and sheds light on algorithmic decision-making processes.
Latest Articles

news

AlgorithmWatch appointed to ‘Global Partnership on AI’

Today was the kick-off of the Global Partnership on AI (GPAI), an initiative founded by the EU Commission and 14 countries. AlgorithmWatch was appointed to the GPAI as one of the very few civil society organizations…

story

Slovenian police acquire automated tools first, legalize them later

The Slovenian police legalized their use of face recognition five years after they started using it. Despite formal safeguards, no institution can restrain the Interior Ministry.

story

Portugal: Automated verification of prescriptions helped crack down on medical fraud

Portugal’s national health service introduced a centralized, automated system to verify medical prescriptions in 2016. One year later, it flagged 20 million euros’ worth of fraud.

publication

AlgorithmWatch study debunks Facebook’s GDPR claims

A new study published by AlgorithmWatch shows that the GDPR needn’t stand in the way of meaningful research access to platform data. It looks to health and environmental sectors for best practices in…

story

Can AI mitigate the climate crisis? Not really.

Several institutions claim that AI will contribute to solving the climate crisis, but evidence is scant. On the contrary, AI has a track record of helping emit more greenhouse gases.

position

Our response to the European Commission’s consultation on AI

Read our response to the European Commission's White Paper on Artificial Intelligence, submitted as part of the public consultation on AI.

position

Automated decision-making systems and the fight against COVID-19 – our position

By AlgorithmWatch. Also available in German, French (Framablog) and Italian (KRINO). As the COVID-19 pandemic rages throughout the world, many are wondering whether and how to use automated decision-making systems…

publication

The only way to hold Facebook, Google and others accountable: More access to platform data

An effective regulatory framework for intermediaries cannot be achieved without meaningful transparency in the form of data access for journalists, academics and civil society actors. This is the result…

story

Ten years on, search auto-complete still suggests slander and disinformation

By Nicolas Kayser-Bril. After a decade and a string of legal actions, an AlgorithmWatch experiment shows that search engines still suggest slanderous, false a…

story

Automated moderation tool from Google rates People of Color and gays as “toxic”

By Nicolas Kayser-Bril. A systematic review of Google’s Perspective, a tool for automated content moderation, reveals that some adjectives are considered more toxic than…
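The core of such an experiment is simple: score near-identical sentences with the public Perspective API and compare the numbers. Below is a minimal sketch of that idea, not AlgorithmWatch’s actual code; the PERSPECTIVE_API_KEY environment variable name and the probe sentences are illustrative assumptions.

```python
# Minimal sketch: compare Perspective API toxicity scores for sentences
# that differ only in a single adjective. Assumes the `requests` package
# and a valid API key (env var name is hypothetical).
import os
import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = os.environ["PERSPECTIVE_API_KEY"]  # hypothetical env var name

def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0-1.0) for `text`."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(API_URL, params={"key": API_KEY}, json=payload)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Illustrative probe: identical sentences, one word swapped.
for adjective in ["tall", "gay", "Black", "white"]:
    sentence = f"I am a {adjective} person."
    print(f"{sentence!r}: {toxicity(sentence):.2f}")
```

Because only one word varies between probes, any systematic gap in the scores can be attributed to how the model treats that word rather than to sentence structure.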

story

Unchecked use of computer vision by police carries high risks of discrimination

By Nicolas Kayser-Bril. At least 11 local police forces in Europe use computer vision to automatically analyze images from surveillance cameras. The risks of discrimination…

story

In Spain, the VioGén algorithm attempts to forecast gender violence

By Michele Catanzaro. This story is part of AlgorithmWatch’s upcoming report Automating Society 2020, to be published later this year. Subscribe to our newsletter to be alerted when the report is…

story

Google apologizes after its Vision AI produced racist results

A Google service that automatically labels images produced starkly different results depending on the skin tone shown in an image. The company fixed the issue, but the problem is likely much broader.
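As with the Perspective example above, the comparison behind this finding can be sketched in a few lines: request label detection for two otherwise similar images and compare the returned labels. This is an illustrative sketch under stated assumptions, not the article’s methodology; the VISION_API_KEY variable and the image file names are hypothetical.

```python
# Minimal sketch: compare the labels Google Cloud Vision returns for two
# similar images. Assumes the `requests` package, a valid API key (env var
# name is hypothetical), and two local test images.
import base64
import os
import requests

API_URL = "https://vision.googleapis.com/v1/images:annotate"
API_KEY = os.environ["VISION_API_KEY"]  # hypothetical env var name

def labels(image_path: str, max_results: int = 5) -> list[str]:
    """Return Vision's top label descriptions for a local image file."""
    with open(image_path, "rb") as f:
        content = base64.b64encode(f.read()).decode("ascii")
    payload = {"requests": [{
        "image": {"content": content},
        "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
    }]}
    resp = requests.post(API_URL, params={"key": API_KEY}, json=payload)
    resp.raise_for_status()
    annotations = resp.json()["responses"][0].get("labelAnnotations", [])
    return [a["description"] for a in annotations]

# Illustrative comparison of two images differing mainly in skin tone.
for path in ["hand_dark_skin.jpg", "hand_light_skin.jpg"]:  # hypothetical files
    print(path, "->", labels(path))
```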
