AlgorithmWatch is a human rights organization based in Berlin and Zurich. We fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, democracy, and sustainability, but strengthen them.

New study: Research on Microsoft Bing Chat

AI Chatbot produces misinformation about elections

Bing Chat, the AI-driven chatbot on Microsoft’s search engine Bing, makes up false scandals about real politicians and invents polling numbers. Microsoft seems unable or unwilling to fix the problem. These findings are based on a joint investigation by AlgorithmWatch and AI Forensics, the final report of which has been published today. We tested if the chatbot would provide factual answers when prompted about the Bavarian, Hessian, and Swiss elections that took place in October 2023.

9 December 2023

#aiact #eu #regulation

Press release

AI Act deal: Key safeguards and dangerous loopholes

After a negotiation marathon in the global spotlight, EU lawmakers closed a deal late Friday night on the AI Act, the EU’s law on regulating the development and use of Artificial Intelligence systems. The EU Parliament managed to introduce a number of key safeguards to improve the original draft text, but Member States still decided to leave people without protection in situations where they would need it most.

AI and Sustainability

SustAIn Magazine #3 – A Different Take on AI: We Decide What AI Has To Do for Us

The third and final issue of the SustAIn magazine deals with the question of who could take responsibility for steering AI development in the right direction.

Platform regulation

Not a solution: Meta’s new AI system to contain discriminatory ads

Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it’s probably more band-aid than AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify if the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.

Blog

11 October 2023

#reporting

Game

Can you break the algorithm?

AlgorithmWatch releases an online game on algorithmic accountability journalism. Players act as a journalist who researches the details of a social network’s algorithm.

Created with MidJourney under the supervision of Alexandre Grilletta

Help us fight injustice in hiring!

Donate your CV to fight together against automated discrimination in job application procedures!

Publications

Read our comprehensive reports, analyses and working papers on the impact and ethical questions of algorithmic decision-making, written in collaboration with our network of researchers and civil society experts. See our publications

A data scientist finds that their work (the algorithm depicted on their laptop screen) has ‘jumped’ out of the screen and threatens to cause problems in a variety of industries. Here, a hospital, a bus, and a siren could represent healthcare, transport, and emergency services. The data scientist looks shocked and worried about the trouble the AI may cause there.
Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / Licensed by CC-BY 4.0

1 August 2023

Making sense of the Digital Services Act

How to define platforms’ systemic risks to democracy

It remains unclear how the largest platforms and search engines should go about identifying “systemic risks” to comply with the DSA. AlgorithmWatch outlines a methodology that will serve as a benchmark for how we, as a civil society watchdog, will judge the risk assessments that are being conducted at this very moment.

Read more

Journalistic stories

How does automated decision-making affect our daily lives? Where are the systems applied, and what happens when something goes wrong? Read our journalistic investigations on the current use of ADM systems and their consequences. Read our stories

Photo by Salemi Wenda on Unsplash

3 January 2024

Work inside the machine: How pre-saves and algorithmic marketing turn musicians into influencers

Streaming platforms allow users to add upcoming tracks to their playlists in order to listen to them as soon as they are released. While this sounds harmless, it has changed the habits of independent musicians, who feel they have to adapt to yet another algorithm.

Read more

Positions

Read our responses and expert analyses on current political and regulatory developments concerning automated decision-making. Read our positions

Wes Cockx & Google DeepMind / Better Images of AI / AI large language models / Licensed by CC-BY 4.0

20 November 2023

Op-Ed

Generative AI must be neither the stowaway nor the gravedigger of the AI Act

Apparently, adoption of the AI Act as a whole is at risk because the EU Council and Parliament are unable to reach a common position on generative AI, with some Member States wanting to exempt generative AI from any kind of regulation. This is highly irresponsible, as it threatens effective prevention of harms caused by AI-driven systems in general.

Read more

Blog

Upcoming events, campaigns and updates from our team – here is where you get your news about AlgorithmWatch. Visit our blog

Simon Dawson / No 10 Downing Street - not edited (CC BY-NC-ND 2.0)

15 November 2023

AI Safety Summit

Missed Opportunities to Address Real Risks

The UK did not need to throw its full weight behind the Frontier Risks narrative; there are other approaches it could have taken.

Read more

Projects

Our research projects take a specific look at automated decision-making in certain sectors, ranging from sustainability, the COVID-19 pandemic, and human resources to social media platforms and public discourse. You can also get involved! Engage and contribute, for example with a data donation! Learn more about our projects

Khari Slaughter for AlgorithmWatch

5 October 2023

New research

ChatGPT and Co: Are AI-driven search engines a threat to democratic elections?

A new study by AlgorithmWatch and AI Forensics shows that using Large Language Models like Bing Chat as a source of information for deciding how to vote is a very bad idea. Since their answers to important questions are in part completely wrong and in part misleading, the likes of ChatGPT can endanger the formation of public opinion in a democracy.

Read more