AlgorithmWatch is a non-profit research and advocacy organization committed to watching, unpacking, and analyzing automated decision-making (ADM) systems and their impact on society.

AI and the Challenge of Sustainability: The SustAIn Magazine’s new edition

In the second issue of our SustAIn Magazine, we show what prevents AI systems from being sustainable and how to find better solutions.

Story

10 March 2023

#instagram

Reels of Fortune: Instagram-shaped memories for a bigger reach

The algorithm used to do it for us; now we do it for the algorithm. Platforms seek data on what people think good memories are. One user tells us how she constructs an end-of-year Recap Reel on Instagram.

Photo by Jon Tyson on Unsplash

New study on AI in the workplace: Workers need control options to ensure co-determination

Employees must be included in the implementation process when so-called Artificial Intelligence (AI) systems are introduced to their workplace. Many companies already use such systems, often based on Machine Learning (ML) algorithms, for automated decision-making (ADM). The European Union’s draft Artificial Intelligence Act (AI Act) is designed to safeguard workers’ rights, but such legislative measures won’t be enough. An AlgorithmWatch study funded by the Hans Böckler Foundation explains how workers’ co-determination can be achieved in practice.

Story

23 February 2023

#work

A dollar for your face: Meet the people behind Machine Learning models

To train machine learning models, tech companies are hiring a Germany-based service provider to buy selfies and pictures of ID cards from underpaid gig workers, whose rights are often disregarded.

Photo by Adam Birkett on Unsplash

Risky business: How do we get a grip on social media algorithms?

Since personalized recommender systems have the power to influence what we think and do, we need to understand the risks they pose to our society. The DSA’s new transparency regime is a promising step forward, but we still need external, adversarial audits by independent research facilities.

Publications

Read our comprehensive reports, analyses and working papers on the impact and ethical questions of algorithmic decision-making, written in collaboration with our network of researchers and civil society experts. See our publications

23 February 2023

New study highlights crucial role of trade unions for algorithmic transparency and accountability in the world of work

Our report shows that trade unions are now called upon to provide practical advice and guidance that empowers union representatives and negotiators to deal with the challenges automation places on workers.

Read more

Journalistic stories

How does automated decision-making affect our daily lives? Where are the systems applied, and what happens when something goes wrong? Read our journalistic investigations on the current use of ADM systems and their consequences. Read our stories

CC-BY Noel Feans

1 February 2023

What to expect from Europe’s first AI oversight agency

Spain announced the first national agency for the supervision of Artificial Intelligence. In its current shape, the plan is very industry-friendly and leaves little room for civil society.

Read more

Positions

Read our responses and expert analyses on current political and regulatory developments concerning automated decision-making. Read our positions

7 March 2023

France: the new law on the 2024 Olympic and Paralympic Games threatens human rights

France proposed a new law on the 2024 Olympic and Paralympic Games (projet de loi relatif aux jeux Olympiques et Paralympiques de 2024) which would legitimize the use of invasive algorithm-driven video surveillance under the pretext of “securing big events”. This new French law would create a legal basis for scanning public spaces to detect specific suspicious events.

Read more

Blog

Upcoming events, campaigns and updates from our team – here is where you get your news about AlgorithmWatch. Visit our blog

Photo by Brett Jordan on Unsplash

16 February 2023

Platforms’ promises to researchers: first reports missing the baseline

An initial analysis shows that platforms have done little to “empower the research community” despite promises made last June under the EU’s revamped Code of Practice on Disinformation.

Read more

Projects

Our research projects take a close look at automated decision-making in specific sectors, ranging from sustainability and the COVID-19 pandemic to human resources, social media platforms, and public discourse. You can also get involved! Engage and contribute, for example with a data donation! Learn more about our projects

19 January 2023

New project: Auditing Algorithms for Systemic Risks

Assessing algorithmic systems is an indispensable step towards the protection of democracy, human rights, and the rule of law. At the moment, it is largely unclear how this can and should be done. With the support of the Alfred Landecker Foundation, we are developing ideas, procedures, and best practices to effectively assess algorithms for systemic risks, test them in practice, and advocate for their adoption.

Read more