Publications

Ensuring Legitimacy in Stakeholder Engagement: The ‘5 Es’ Framework

The DSA envisages that external stakeholders – such as independent experts, civil society groups, and industry representatives – be involved in its rollout and enforcement. To help ensure the legitimacy of these processes, we have developed the “5 Es Framework”, comprising five guiding principles: Equity, Expertise, Effectiveness, Empowering, and Expanding Competencies.

New study: Research on Microsoft Bing Chat

AI Chatbot produces misinformation about elections

Bing Chat, the AI-driven chatbot on Microsoft’s search engine Bing, makes up false scandals about real politicians and invents polling numbers. Microsoft seems unable or unwilling to fix the problem. These findings are based on a joint investigation by AlgorithmWatch and AI Forensics, the final report of which has been published today. We tested if the chatbot would provide factual answers when prompted about the Bavarian, Hessian, and Swiss elections that took place in October 2023.

AI and Sustainability

SustAIn Magazine #3 – A Different Take on AI: We Decide What AI Has To Do for Us

The third and final issue of the SustAIn magazine deals with the question of who could take responsibility for steering AI development in the right direction.

Making sense of the Digital Services Act

How to define platforms’ systemic risks to democracy

It remains unclear how the largest platforms and search engines should go about identifying “systemic risks” to comply with the DSA. AlgorithmWatch outlines a methodology that will serve as a benchmark for how we, as a civil society watchdog, will judge the risk assessments that are currently being conducted.

Country Analyses

New Study: Data Practices and Surveillance in the World of Work

Workers are increasingly being digitally surveilled, datafied and algorithmically managed in Italy, Poland, Sweden and the United Kingdom, a qualitative analysis by AlgorithmWatch shows.

Press release

New study on AI in the workplace: Workers need control options to ensure co-determination

Employees must be included in the implementation process when so-called Artificial Intelligence (AI) systems are introduced into their workplace. Many companies already use such systems, which are often based on Machine Learning (ML) algorithms, for automated decision-making (ADM). The European Union’s draft Artificial Intelligence Act (AI Act) is designed to safeguard workers’ rights, but legislative measures alone won’t be enough. An AlgorithmWatch study funded by the Hans Böckler Foundation explains how workers’ co-determination can be achieved in practice.

AI and the Challenge of Sustainability: The SustAIn Magazine’s new edition

In the second issue of our SustAIn Magazine, we show what prevents AI systems from being sustainable and how to find better solutions.

New study highlights crucial role of trade unions for algorithmic transparency and accountability in the world of work

Our report shows that trade unions are now called upon to provide practical advice and guidance that empower union representatives and negotiators to deal with the challenges that automation poses for workers.

AutoCheck workshops on Automated Decision-Making Systems and Discrimination

Understanding causes, recognizing cases, supporting those affected: materials for conducting a workshop.

Policy Brief: Our recommendations for strengthening data access for public interest research

New magazine on how sustainable AI can be put into practice

The environmental, social and economic sustainability costs of AI urgently need to be addressed by academia, industry, civil society and policy makers – based on evidence. With the first edition of our SustAIn magazine, we hope to fuel this debate.

How to combat algorithmic discrimination? A guidebook by AutoCheck

We encounter automated decision-making systems almost every day, and they may discriminate against us without our even knowing it. A new guidebook helps to better recognize such cases and to support those affected.

Data altruism: how the EU is screwing up a good idea

In a new AlgorithmWatch discussion paper, data regulation expert Winfried Veil argues that the Data Governance Act is not only annoyingly bureaucratic, but a missed opportunity to breathe life into the “Data for Good” idea.

Sustainability criteria for Artificial Intelligence – evaluation approach presented

AI systems often fall short when it comes to sustainability: they raise concerns about discrimination, high energy and resource consumption, and the reproduction of social inequalities. In a new publication, the “SustAIn” project presents comprehensive criteria for a systematic sustainability assessment of AI-based systems, with the ultimate aim of promoting more sustainable AI.

Tracing The Tracers 2021 report: Automating COVID responses

Final project report

Domestic COVID certificates: what does the evidence say?

Born to help reopen international travel routes, digital COVID certificates are now required in several countries to enter premises such as bars, restaurants, gyms, pools, and museums, and to attend large public events. But do they work — and what for, precisely? More fundamentally, is it even possible to have an evidence-based debate about them at all? Tracing The Tracers looked at the lessons we should learn from the available literature, with the help of a stellar group of researchers.

Making sense of digital contact tracing apps for the next pandemics

In an interview with AlgorithmWatch, Prof. Susan Landau discusses why we need to resist fear in the face of pandemic uncertainty and the normalization of health surveillance technologies — and why the time to have a broad democratic discussion about their future uses is now.

Digital contact tracing apps: do they actually work? A review of early evidence

Throughout the COVID-19 pandemic, many smartphone apps were launched to complement and augment manual contact tracing efforts without a priori knowledge of their actual effectiveness. A year later, do we know if they worked as intended? An analysis of early evidence – from both the literature and actual usage – by AlgorithmWatch finds that the results so far are contradictory, and that comparability issues might prevent an informed overall judgment on the role of digital contact tracing apps in the response to the COVID-19 pandemic altogether.

Automated Decision-Making Systems in the Public Sector – An Impact Assessment Tool for Public Authorities

How can we ensure a trustworthy use of automated decision-making systems (ADMS) in the public administration? AlgorithmWatch and AlgorithmWatch Switzerland developed a concrete and practicable impact assessment tool for ADMS in the public sector. This publication provides a framework ready to be implemented for the evaluation of specific ADMS by public authorities at different levels.

Towards accountability in the use of Artificial Intelligence for Public Administrations

Michele Loi and Matthias Spielkamp analyze the regulatory content of 16 guideline documents on the use of AI in the public sector by mapping their requirements onto our philosophical account of accountability. They conclude that while some guidelines refer to processes that amount to auditing, the debate would benefit from more clarity about the nature of auditors’ entitlement and the goals of auditing, not least in order to develop ethically meaningful standards against which different forms of auditing can be evaluated and compared.

In Italy, general practitioners and some regions adopt COVID-19 vaccine prioritization algorithms

Amid a chaotic rollout of the national vaccination plan, the Italian Federation of General Practitioners (FIMMG) and some regions in Italy are resorting to algorithms to more efficiently prioritize who gets vaccinated against COVID-19.

HR Puzzle

A simulation on machine learning and so-called "artificial intelligence" in human resources management.

People analytics in the workplace – how to effectively enforce labor rights

Introduction and recommendations

Analysis: Digital vaccine certificates – global patchwork, little transparency

A global debate has sparked around the idea of implementing a digital infrastructure to prove a person’s COVID-19 vaccination status across borders. But as the initiatives multiply across Europe and all over the globe, an international consensus is hard to reach — and issues still abound.

Automating Society 2020 – Country issues: Germany, France, Italy, Switzerland & Spain

Find all country issues of the Automating Society 2020 report and videos of the launch events on this page.