AlgorithmWatch is a human rights organization based in Berlin and Zurich. We fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, democracy, and sustainability, but strengthen them.

AI and Sustainability

SustAIn Magazine #3 – A Different Take on AI: We Decide What AI Has To Do for Us

The third and final issue of the SustAIn magazine deals with the question of who could take responsibility for steering AI development in the right direction.

Platform regulation

Not a solution: Meta’s new AI system to contain discriminatory ads

Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it’s probably more band-aid than AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify if the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.

Plant-identifying apps: good for amateurs, bad for students

Apps that automatically identify plants have become immensely popular among amateur botanists. While the apps might help amateurs in their hobby, they have also made inroads among students and professionals, with potentially serious effects.

Photo by NEOM on Unsplash

Op-Ed

Generative AI must be neither the stowaway nor the gravedigger of the AI Act

Apparently, adoption of the AI Act as a whole is at risk because the EU Council and Parliament are unable to reach a common position on generative AI, with some Member States wanting to exempt generative AI from any kind of regulation. This is highly irresponsible, as it threatens effective prevention of harms caused by AI-driven systems in general.

New research

ChatGPT and Co: Are AI-driven search engines a threat to democratic elections?

A new study by AlgorithmWatch and AI Forensics shows that using Large Language Models like Bing Chat as a source of information for deciding how to vote is a very bad idea. Because their answers to important questions are in part completely wrong and in part misleading, the likes of ChatGPT can pose a danger to the formation of public opinion in a democracy.

Khari Slaughter for AlgorithmWatch

Some image generators produce more problematic stereotypes than others, but all fail at diversity

Automated image generators are often accused of spreading harmful stereotypes, but studies usually only look at MidJourney. Other tools make serious efforts to increase diversity in their output, but effective remedies remain elusive.

Blog

11 October 2023

#reporting

Game

Can you break the algorithm?

AlgorithmWatch has released an online game about algorithmic accountability journalism. Players take on the role of a journalist investigating the details of a social network’s algorithm.

Created with MidJourney under the supervision of Alexandre Grilletta

Help us fight injustice in hiring!

Donate your CV and join the fight against automated discrimination in job application procedures!

Publications

Read our comprehensive reports, analyses and working papers on the impact and ethical questions of algorithmic decision-making, written in collaboration with our network of researchers and civil society experts. See our publications

Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / Licenced by CC-BY 4.0

1 August 2023

Making sense of the Digital Services Act

How to define platforms’ systemic risks to democracy

It remains unclear how the largest platforms and search engines should go about identifying “systemic risks” to comply with the DSA. AlgorithmWatch outlines a methodology that will serve as a benchmark for how we, as a civil society watchdog, will judge the risk assessments being conducted at this very moment.

Read more

Journalistic stories

How does automated decision-making affect our daily lives? Where are the systems applied, and what happens when something goes wrong? Read our journalistic investigations on the current use of ADM systems and their consequences. Read our stories

"A red droplet on a yellow and blue background" / Adobe Firefly

24 October 2023

Algorithmic blood donations in Ukraine

On paper, Donor.ua solves many of the inefficiencies of blood donation in Ukraine: a matching algorithm connects people willing to donate with those in need. But implementation has proven difficult, and the war is not the only reason.

Read more

Positions

Read our responses and expert analyses on current political and regulatory developments concerning automated decision-making. Read our positions

Photo by AJ Colores on Unsplash

20 September 2023

Civil society calls on the EU to draw limits on surveillance technology

Police and migration authorities must respect fundamental rights when using AI

As AI systems are increasingly used by law enforcement, migration control and national security authorities, the EU Artificial Intelligence Act (AI Act) is an urgent opportunity to prevent harm, protect people from rights violations and provide legal boundaries for authorities to use AI within the confines of the rule of law.

Read more

Blog

Upcoming events, campaigns and updates from our team – here is where you get your news about AlgorithmWatch. Visit our blog

Simon Dawson / No 10 Downing Street - not edited (CC BY-NC-ND 2.0)

15 November 2023

AI Safety Summit

Missed Opportunities to Address Real Risks

The UK did not need to throw its full weight behind the Frontier Risks narrative: there are other approaches it could have taken.

Read more

Projects

Our research projects take a close look at automated decision-making in specific sectors, ranging from sustainability, the COVID-19 pandemic, and human resources to social media platforms and public discourse. You can also get involved! Engage and contribute, for example with a data donation! Learn more about our projects

Irem Kurt for AlgorithmWatch, CC BY 4.0

20 April 2023

Automation on the Move

With the ‘Automation on the Move’ project, AlgorithmWatch will help to challenge the untenable status quo for people on the move.

Read more