AlgorithmWatch is a non-profit research and advocacy organization committed to watching, unpacking, and analyzing automated decision-making (ADM) systems and their impact on society.

Story

1 February 2023

#aiact #regulation #spain

What to expect from Europe’s first AI oversight agency

Spain has announced the first national agency for the supervision of Artificial Intelligence. In its current shape, the plan is very industry-friendly and leaves little room for civil society.

CC-BY Noel Feans

New project: Auditing Algorithms for Systemic Risks

Assessing algorithmic systems is an indispensable step towards protecting democracy, human rights, and the rule of law. At the moment, it is largely unclear how such assessments can and should be done. With the support of the Alfred Landecker Foundation, we are developing ideas, procedures, and best practices to effectively assess algorithms for systemic risks, test them in practice, and advocate for their adoption.

Blog

12 January 2023

#jobs #team

Join AlgorithmWatch as our new Project Lead “Auditing Algorithms for Systemic Risks”

Part-time (30-32 h per week) | Starting date: March or April 2023 | Deadline: 13 February 2023

Photo: fauxels | Pexels

What does TikTok know about you? Data donations deliver answers!

Companies like Facebook, Instagram, Google, and TikTok often know about the harmful effects of their algorithmic systems and yet continue to prevent independent research on them. Data donations like DataSkop are one of the few ways to investigate opaque algorithms.

Khari Slaughter for AlgorithmWatch, CC BY 4.0

Blog

12 January 2023

#jobs #team

Apply now as PR & Outreach Manager at AlgorithmWatch!

Full- or part-time, 30-40 h per week | Starting date: as soon as possible | Deadline for applications: 12 February 2023

Photo: Mikhail Nilov | Pexels

Position

6 December 2022

#aiact #eu #germany

How the German government decided not to protect people against the risks of AI

Today, EU member states’ ministers are meeting in Brussels to formally adopt the version of the AI Act that their governments agreed to. As it stands, it will not be in line with the German government’s vows to protect fundamental rights. Instead, it will grant far-reaching exemptions to security agencies, to the detriment of people’s rights.

Publications

Read our comprehensive reports, analyses, and working papers on the impact of algorithmic decision-making and the ethical questions it raises, written in collaboration with our network of researchers and civil society experts. See our publications

7 September 2022

AutoCheck workshops on Automated Decision-Making Systems and Discrimination

Understanding causes, recognizing cases, supporting those affected: documents for implementing a workshop.

Read more

Journalistic stories

How does automated decision-making affect our daily lives? Where are these systems applied, and what happens when something goes wrong? Read our journalistic investigations into the current use of ADM systems and their consequences. Read our stories

Photo by Mika Baumeister on Unsplash

12 January 2023

Does a simple algorithm help against domestic violence?

Police officers in the German state of Mecklenburg-Western Pomerania predict the likelihood of future incidents of domestic violence using ODARA, a Canadian tool with unproven reliability. It facilitates cooperation with social workers, but it does not always work.

Read more

Positions

Read our responses and expert analyses on current political and regulatory developments concerning automated decision-making. Read our positions

Tara Winstead, An Artificial Intelligence Illustration for Pexels

31 January 2023

Civil society observers call for an effective Council of Europe Convention on AI

On the occasion of the International Data Protection Day on 28 January, AlgorithmWatch and six other civil society organizations remind the Council of Europe of its mandate in negotiating a global Convention on AI.

Read more

Blog

Upcoming events, campaigns and updates from our team – here is where you get your news about AlgorithmWatch. Visit our blog

Photo by Christopher Gower on Unsplash

4 January 2023

AlgorithmWatch welcomes first fellows in algorithmic accountability reporting

After announcing our fellowship in algorithmic accountability reporting, we received over 100 applications and were overwhelmed by their quality and diversity. After careful examination, we are very happy to introduce our first six fellows: an outstanding group of journalists, academics, and a civil society activist.

Read more

Projects

Our research projects take a close look at automated decision-making in specific sectors, ranging from sustainability and the COVID-19 pandemic to human resources, social media platforms, and public discourse. You can also get involved! Engage and contribute, for example with a data donation! Learn more about our projects

About the project, 1 April 2021

SustAIn: The Sustainability Index for Artificial Intelligence

How sustainable is Artificial Intelligence? We already know that developing and deploying AI consumes a great deal of energy, that so-called clickworkers classify training data sets for AI systems under very poor working conditions, and that these systems quite often reinforce existing patterns of discrimination. Given the growing number of AI systems in use, there is an urgent need to discuss how AI can be made more sustainable. This is where our project SustAIn comes in.

Read more