Blog

The Musk Effect: X’s impact on Germany’s election

AlgorithmWatch and the DFRLab have published new research on X during Germany’s election campaign. We analyzed X posts by German politicians as well as by prominent anti-far-right organizations and found that the most viral posts are dominated by references to Elon Musk and his support for the AfD.

AI Action Summit in Paris – a missed opportunity?

Our Executive Directors, Angela Müller and Matthias Spielkamp, represented us last week at the international AI Action Summit in Paris, hosted by the French government. So, what to make of the summit, the billion-dollar promises made at it, and the Big Tech party beats that never stop pounding? These are their main observations.

Explainer: AI Energy Consumption

Fighting the Power Shortage: The AI Energy Crisis

Is AI helping to solve the climate crisis or making it worse? Either way, the rise of AI applications goes hand in hand with the need for additional data centers, for which energy resources are currently lacking.

As of February 2025: Harmful AI applications prohibited in the EU

Bans under the EU AI Act are now applicable. Certain risky AI systems that have already been trialed or used in everyday life are from now on – at least partially – prohibited.

Zuckerberg Makes Meta Worse to Please Trump

With his decision to gut moderation and fact-checking on Meta’s platforms Instagram, Facebook, and Threads, Mark Zuckerberg shows that he cares more about winning Donald Trump’s approval than about the harm his platforms can cause society.

False Positives: A Podcast on Financial Discrimination & De-banking

Agence France-Presse has released a podcast based on a six-month investigation conducted as part of the AlgorithmWatch Algorithmic Accountability Reporting Fellowship.

A Year of Challenging Choices – 2024 in Review

2024 was a "super election" year that also marked the rise of generative Artificial Intelligence. With the adoption of the AI Act, it seemed poised to be the moment we finally gained control over automated systems. Yet that control still feels out of reach.

Give a Meaningful Gift

Looking for a gift for a good cause? Make a donation to AlgorithmWatch in the name of your loved ones and support the fair use of AI and algorithms.

Algorithmic Accountability Reporting Fellowship

New Cohort of Fellows to Research the Political Economy Behind AI

Two years after launching our Algorithmic Accountability Reporting Fellowship, we are excited to introduce a new cohort of journalists and data scientists who will work on stories about the foundations of Artificial Intelligence and its supply chain.

Explainer: Biometric recognition systems

Show Your Face and AI Tells Who You Are

Biometric recognition technologies can identify and monitor people. They are supposed to provide more security, but they put fundamental rights at risk, discriminate, and can even pave the way to mass surveillance.

Automation on the Move

The Automation of Fortress Europe: Behind the Black Curtain

The European Union poured 5 million euros into the development of a border surveillance system called NESTOR. When we tried to look into it, we were presented with hundreds of redacted, blacked-out pages.

Automation on the Move

Automating EU Borders, Broken Checks and Balances

For over a year, we have been looking into EU-funded border security research projects to assess their methodological approaches and ethical implications. We failed.

Automation on the Move

EMERALD – The One That Fell from Grace

Only one proposed border security research project did not meet the EU’s ethical requirements and was rejected, AlgorithmWatch found. What was so unique about this surveillance system?

Why we need to audit algorithms and AI from end to end

The full picture of algorithmic risks and harms is a complicated one. So how do we approach the task of auditing algorithmic systems? There are various attempts to simplify the picture into overarching, standardized frameworks, or to focus on particular areas, such as understanding and explaining the “black box” of models. While this work and thinking have benefits, we need to look at systems from end to end to fully capture the reality of algorithmic harms.

Conference

A Civil Society Summit on Tech, Society, and the Environment

At the “Civil Society Summit on Tech, Society, and the Environment” convened by EDRi, AlgorithmWatch and more than 100 civil society partners and digital rights organizations from around the world are coming together with EU policymakers to foster digital rights in the EU and strengthen accountability for the good of the people.

In the run-up to the German federal state elections:

Chatbots are still spreading falsehoods

In September 2024, federal state elections will be held in Thuringia, Saxony, and Brandenburg. AlgorithmWatch and CASM Technology have tested whether AI chatbots answer questions about these elections correctly and without bias. The result: They are not reliable.

AlgorithmWatch is running a new round of its Algorithmic Accountability Reporting Fellowship

You can now apply to the fourth round of AlgorithmWatch's Algorithmic Accountability Reporting Fellowship. Running since January 2023, the program has connected journalists and researchers from across Europe and uncovered new stories about automated discrimination on the continent.

Explainer: AI in the workplace

Managed by the algorithm: how AI is changing the way we work

Automated decision-making systems control our work, whether in companies or via platforms that allocate jobs to independent contractors. Companies can use them to increase their efficiency, but such systems have a downside: They can also be used to surveil employees and often conceal the exploitation of workers and the environment.

Explainer: AI & Sustainability

Sustainable AI: a contradiction in terms?

Does Artificial Intelligence (AI) help tackle the climate crisis or is it an environmental sin worse than flying? We explain how much energy AI systems really consume, why we need better measurements, how AI can become more sustainable, and what all this has to do with the EU's AI Act.

Explainer: AI & labor law

What works and what doesn’t: AI systems and labor law

In many companies, employees are controlled by automated systems, especially in platform work. The use of such systems in the (virtual) workplace is not yet comprehensively regulated by law. New (draft) laws are intended to help workers protect their rights.

Explainer: Algorithmic discrimination

How and why algorithms discriminate

Automated decision-making systems contain hidden discriminatory biases. We’ll explain the causes, possible consequences, and the reasons why existing laws do not provide sufficient protection against algorithmic discrimination.

10 Questions about AI and elections

2024 is an important election year. While citizens all over the world get ready to cast their ballot, many people worry about AI: Are chatbots and fake images a threat to democracy? Can we still trust what we see online? We explain why the hype around AI and elections is somewhat overblown and which real risks we need to watch out for instead.

Campaign: ADM and People on the Move

Borders without AI

29,000 people have died in the Mediterranean over the past ten years while trying to reach the EU. You would think that the EU wanted this tragedy to stop and that scientists across Europe were working feverishly to make this happen with the latest technology. The opposite is the case: With the help of so-called Artificial Intelligence, the walls are being raised, financed with taxpayers' money.

Explainer: ADM in the public sector

The algorithmic administration

Automating administration processes promises efficiency. Yet the procedures often put vulnerable people at a disadvantage, as shown by a number of examples throughout Europe. We’ll explain why using automation systems in the domain of public administration can be especially problematic and how risks may be detected at an early stage.

EU’s AI Act fails to set gold standard for human rights

Following a gruelling negotiation process, EU institutions are expected to formally adopt the final AI Act in April 2024. Here’s our round-up of how the final law fares against our collective demands.