Blog

Topic overview

Resource consumption of AI: The insatiable industry and its costs

Artificial intelligence does not simply fall from the sky. Its development, hardware production and everyday operation consume vast amounts of electricity, water and other resources. Yet tech companies remain reluctant to disclose the true costs to people and the environment. Here, we provide an overview of the issue.

Are Google AI Overviews killing media pluralism? AlgorithmWatch is among the first organizations to investigate.

Google could destroy the web traffic that is the lifeblood of organisations producing reliable information. The culprit is its new AI Overviews feature – a tool that hides links to real websites in favour of Google's own summaries, which have, for example, claimed that Olaf Scholz is still the German Chancellor. AlgorithmWatch is making one of the very first data access requests under new EU rules to investigate these systemic risks to media freedom.

Event

The Individual in the Machine – Meredith Whittaker on Reclaiming Privacy in the Age of AI

On 25 September, AlgorithmWatch invited guests to an exclusive talk with Meredith Whittaker, President of the messenger platform Signal. Meredith Whittaker, Maria Exner (Publix), and Matthias Spielkamp (AlgorithmWatch) discussed what we must do to bring technology in line with human needs – particularly in protecting the privacy of individuals from powerful platform and operating system providers.

Automation on the Move

Border Surveillance on the Move to Enforce Restrictive Measures

Two recent Horizon Europe research projects are developing adaptable, mobile AI-based surveillance assemblages to secure both the external and internal borders of the European Union. AlgorithmWatch looked into project material that revealed a lopsided fixation on defense.

Automation on the Move

The EU Spends Big on Border Tech — But Has No Idea What It Gets

The European Commission claims not to monitor whether the research findings of EU-funded projects are applied to the market after the projects end, AlgorithmWatch found. As no EU institution seems to be responsible for checking on border security investments, it is hard to tell whether the millions spent actually lead to technological innovation.

Let’s Stop Nudification Apps Together!

Non-consensual nudity services are a horrifying use of AI. AlgorithmWatch is trying to use the EU's Digital Services Act to limit the spread of these services on social media and app stores. But platforms like X are standing in our way. What are our next steps, and how can you help?

Open call to apply for AlgorithmWatch’s reporting fellowship on AI and power

For the fifth time, AlgorithmWatch is looking for new Algorithmic Accountability Reporting fellows. Apply now if you have research ideas about the relationship between Artificial Intelligence and power, and its consequences. The application deadline is 15 September 2025.

Pride With Pride! Stop Mass Surveillance at Pride, Stop Face Recognition Now

Report algorithmic discrimination!

When we apply for credit, apartments, or jobs online, companies increasingly use automated systems to process our data and make decisions that affect our daily lives. The problem: such systems are not neutral and can reproduce inequalities and assumptions about people that already exist in society. What can we do to ensure that the use of non-transparent automated systems does not disadvantage people? We need to make algorithmic discrimination visible – and you can help us do it!

What is algorithmic discrimination?

Discrimination and Artificial Intelligence (AI): Here's an overview of the topic.

Explainer: Predictive Policing

Algorithmic Policing: When Predicting Means Presuming Guilty

Algorithmic policing refers to practices that allegedly make it possible to “predict” future crimes and detect future perpetrators by using algorithms and historical crime data. We explain why such practices are often discriminatory, do not live up to their promises, and lack legal justification.

The Musk Effect: X’s impact on Germany’s election

AlgorithmWatch and the DFRLab have produced new research on X during the German elections. We analyzed X posts by German politicians as well as by prominent anti-far-right organizations, and found that the most viral posts were dominated by references to Elon Musk and his support for the AfD.

AI Action Summit in Paris – a missed opportunity?

Our Executive Directors, Angela Müller and Matthias Spielkamp, represented us last week at the international AI Action Summit in Paris, hosted by the French government. So, what to make of the summit, the billion-dollar promises made there, and the Big Tech party beats that never stop pounding? These are their main observations.

Explainer: AI Energy Consumption

Fighting the Power Deficiency: The AI Energy Crisis

Is AI contributing to solving the climate crisis or to making it worse? Either way, the increase in AI applications goes hand in hand with the need for additional data centers, for which energy resources are currently lacking.

As of February 2025: Harmful AI applications prohibited in the EU

The first bans under the EU AI Act are now applicable. Certain risky AI systems that have already been trialed or used in everyday life are from now on – at least partially – prohibited.

Zuckerberg Makes Meta Worse to Please Trump

With his decision to gut moderation and fact-checking on Meta's platforms Instagram, Facebook, and Threads, Mark Zuckerberg shows he cares more about the approval of Donald Trump than about how his platforms can harm society.

False Positives: A Podcast on Financial Discrimination & De-banking

AlgorithmWatch and Agence France-Presse (AFP) have released a podcast on automated discrimination in the financial sector, based on a six-month investigation conducted within the framework of our Algorithmic Accountability Reporting fellowship.

A Year of Challenging Choices – 2024 in Review

2024 was a "super election" year and it marked the rise of generative Artificial Intelligence. With the adoption of the AI Act, it seemed poised to be the moment we finally gained control over automated systems. Yet, that certainty still feels out of reach.

Give a Meaningful Gift

Looking for a gift for a good cause? Make a donation to AlgorithmWatch in your loved ones' name and support the fair use of AI and algorithms.

Algorithmic Accountability Reporting Fellowship

New Cohort of Fellows to Research the Political Economy Behind AI

Two years after launching our Algorithmic Accountability Reporting Fellowship, we are excited to introduce a new cohort of journalists and data scientists who will work on stories about the foundations of Artificial Intelligence and its supply chain.

Explainer: Biometric recognition systems

Show Your Face and AI Tells Who You Are

Biometric recognition technologies can identify and monitor people. They are supposed to provide more security, but they put fundamental rights at risk, discriminate, and can even pave the way to mass surveillance.

Automation on the Move

The Automation of Fortress Europe: Behind the Black Curtain

The European Union poured 5 million euros into the development of a border surveillance system called NESTOR. When we tried to look into it, we were presented with hundreds of redacted, blacked-out pages.

Automation on the Move

Blurred Lines: When Civilian Research Projects Become of Military Interest

The EU does not fund border security research projects that primarily target military applications. Or does it? AlgorithmWatch found that in the realm of border security, civilian applications appeal enormously to the military.

Automation on the Move

Automating EU Borders, Broken Checks and Balances

For over a year, we have been looking into EU-funded border security research projects to assess their methodological approaches and ethical implications. We failed.

Automation on the Move

EMERALD – The One That Fell from Grace

Only one proposed border security research project failed to meet the EU's ethical requirements and was rejected, AlgorithmWatch found. What was so unique about this surveillance system?