AlgorithmWatch is a non-profit research and advocacy organization committed to watching, unpacking, and analyzing automated decision-making (ADM) systems and their impact on society.

ChatGPT-like models boom, but small languages remain in shadows

A lack of source material, investment, and commercial prioritization is holding back the development of generative models and automated moderation for languages spoken in smaller countries and regions.

Photo by Tobias Messer on Unsplash

Meet ChatPal, the European bot against loneliness

Automated chatbots might help patients, but researchers and medical professionals warn that they are not a substitute for professional, in-person therapy.

The EU now has the means to rein in large platforms. It should start with Twitter.

The European Commission today announced which platforms will have to comply with the strictest rules the Digital Services Act imposes on companies. Twitter should be at the top of its list when enforcing these rules.

Photo by Brett Jordan on Unsplash

Automation on the Move

With the ‘Automation on the Move’ project, AlgorithmWatch will help to challenge the untenable status quo for people on the move.

Irem Kurt for AlgorithmWatch, CC BY 4.0

Position

17 April 2023

#aiact #gpai

AlgorithmWatch demands the regulation of General Purpose AI in the AI Act

There has been much speculation recently about large language models sounding a death knell for humanity. This is a red herring that distracts from the real harm people already suffer from such models. To prevent an escalation of harmful effects, AlgorithmWatch calls on the German Government and the EU to implement an array of effective precautions.

Story

4 April 2023

#lgbtq #twitter

AI surveillance rumors: gay adult content creators face sanctions

Early last month, many gay fetish accounts were charged with distributing online porn to minors, a criminal offense in Germany. Many suspect an automated tool, but no one knows for sure.

AI and the Challenge of Sustainability: The SustAIn Magazine’s new edition

In the second issue of our SustAIn Magazine, we show what prevents AI systems from being sustainable and how to find better solutions.

Publications

Read our comprehensive reports, analyses, and working papers on the impact and ethical questions of algorithmic decision-making, written in collaboration with our network of researchers and civil society experts.

15 March 2023

Press release

New study on AI in the workplace: Workers need control options to ensure co-determination

Employees must be included in the implementation process if so-called Artificial Intelligence (AI) systems are introduced into their workplace. Many companies already use such systems, often based on Machine Learning (ML) algorithms, for automated decision-making (ADM). The European Union’s draft Artificial Intelligence Act (AI Act) is designed to safeguard workers’ rights, but such legislative measures won’t be enough. An AlgorithmWatch study funded by the Hans Böckler Foundation explains how workers’ co-determination can be achieved in practice.


Journalistic stories

How does automated decision-making affect our daily lives? Where are the systems applied, and what happens when something goes wrong? Read our journalistic investigations on the current use of ADM systems and their consequences.

30 May 2023

Let the Games Begin: France’s Controversial Olympic Law Legitimizes Automated Surveillance Testing at Sporting Events

Building up from a decade of surveillance creep, the new French law is yet another example of how sporting events are used to normalize automated surveillance systems in public spaces.


Positions

Read our responses and expert analyses on current political and regulatory developments concerning automated decision-making.

Photo by wim hoppenbrouwers on Flickr

5 June 2023

Joint statement

A diverse auditing ecosystem is needed to uncover algorithmic risks

The Digital Services Act (DSA) will force the largest platforms and search engines to pay for independent audits to help check their compliance with the law. But who will audit the auditors? Read AlgorithmWatch and AI Forensics' joint feedback to the European Commission on strengthening the DSA’s independent auditing rules via a Delegated Act.


Blog

Upcoming events, campaigns, and updates from our team – this is where you get your news about AlgorithmWatch.

Photo by José Martín Ramírez Carrasco on Unsplash

20 April 2023

Investigative journalism and algorithmic fairness

Investigating systems that make decisions about humans, often termed algorithmic accountability reporting, is becoming ever more important. To do it right, reporters need to understand concepts of fairness and bias – and work in teams. A primer.


Projects

Our research projects take a specific look at automated decision-making in certain sectors, ranging from sustainability and the COVID-19 pandemic to human resources, social media platforms, and public discourse. You can also get involved! Engage and contribute, for example with a data donation!

19 January 2023

New project: Auditing Algorithms for Systemic Risks

Assessing algorithmic systems is an indispensable step towards protecting democracy, human rights, and the rule of law. At the moment, it is largely unclear how this can and should be done. With the support of the Alfred Landecker Foundation, we are developing ideas, procedures, and best practices to effectively assess algorithms for systemic risks, test them in practice, and advocate for their adoption.
