The year that was not saved by automated systems – 2021 in review

A climate catastrophe in Germany and the revelations of the Facebook Files had one thing in common: the humans in the loop failed to make the right decisions. 2021 was not the year algorithms were reined in, but 2022 might be.

Nicolas Kayser-Bril
Head of Journalism (on parental leave)

Rhineland-Palatinate is one of Germany’s 16 regions, or Länder. Its best claim to international fame is perhaps the Lorelei, a steep rock overlooking the Rhine river. Floods are a constant worry in this mountainous region, and the Land’s environment agency runs an automated surveillance network. It manages between 290 and 300 sensors at 145 gauging stations, which measure water levels on each of the region’s major rivers. It also uses data from a further 120 gauging stations that belong to the German federal government or, for upstream monitoring, neighboring countries.

Every five minutes, each station automatically measures the water level. Every 15 minutes, a program computes the average of the last three measurements and sends it to the database of the Rhineland-Palatinate environment agency.

Twice a day, an automated system runs a simulation using LARSIM (Large Area Runoff Simulation Model), an algorithm the agency developed together with neighboring environment authorities, and provides a forecast of water flows. Whenever there is a risk of flooding, LARSIM is run every three hours instead. When a river is forecast to rise above a manually set limit, another system sends an e-mail, complete with graphics, to the local authorities where a flood is likely.
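
The division of labor between the automated pipeline and the humans downstream is simple enough to sketch in a few lines of code. The following Python snippet is an illustration only: the function names, readings and thresholds are our assumptions, not the agency's actual software.

```python
# A minimal sketch of the measure-average-alert pipeline described above.
# All names, values and thresholds are hypothetical, for illustration only.
from statistics import mean

def report_water_level(readings_cm: list[float]) -> float:
    """Average the last three 5-minute readings, as the 15-minute
    reporting step does, before the value is sent to the database."""
    return mean(readings_cm[-3:])

def crosses_flood_limit(forecast_cm: float, manual_limit_cm: float) -> bool:
    """Compare a LARSIM-style forecast against a manually set limit;
    crossing it triggers the automated e-mail to local authorities."""
    return forecast_cm > manual_limit_cm

# Example: three readings from a hypothetical gauging station
readings = [312.0, 318.5, 327.0]
level = report_water_level(readings)  # about 319.2 cm
if crosses_flood_limit(level, manual_limit_cm=300.0):
    print(f"ALERT: forecast level {level:.1f} cm exceeds the set limit")
```

The crucial point, as the events of July 2021 showed, is that everything after the last line of such a pipeline, evacuating people above all, remains a human decision.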

Yet another automated system sends data to KATWARN, a mobile app that millions of people in Germany have downloaded and that alerts them in case of an emergency (alerts can also be viewed on a website).

Both systems can be overridden by humans, who can relaunch simulations at any time, for instance when the precipitation or water flows they observe do not match the previous forecast.

On 8 July 2021, a severe depression was forecast to hit Rhineland-Palatinate the next week. On 14 July, the system connected to LARSIM sent automated e-mails to the local authorities around the Ahr river, warning of imminent danger of severe flooding. E-mails were sent at 15:26, 18:26 and 21:26. On the same day, at 11:17, the agency’s systems sent a “red” alert to KATWARN, which was upgraded to “violet”, the highest alert level, at 17:17.

And then nothing happened.

As water rose in the Ahr valley, the local administration (Kreisverwaltung), which has sole authority in the matter, did not issue an evacuation order until 23:09. By then, the Ahr river was already about 5 meters above its normal level (the exact value is unknown, as the floodwater ripped the sensors apart shortly after 20:45).

That night, 134 people died along the Ahr river, in what was probably the worst catastrophe the valley has seen in two centuries. Prosecutors are investigating the head of the local administration for negligent homicide.

Humans in the loop

On 14 July 2021, a well-oiled automated system performed the task it had been built for. Human decision-makers then failed to take action.

Automated systems and Artificial Intelligence have long been hailed as silver bullets against the climate crisis. 2021 showed that action by politicians was probably more important.

World leaders met in Glasgow in November and, for the 26th time, failed to solve the climate crisis. Hitachi, a sponsor of the conference, kept serving the AI Kool-Aid (“Can technology and data fight climate change? The answer is an unequivocal YES!”, one of its conference presentations said). Others were much more measured. A report commissioned by the Global Partnership on AI*, published during COP26, admitted that “AI can have both positive and negative impacts on climate and the environment”. In the end, it is humans who make that fateful decision.

Google, for instance, pledged in May 2020 to stop making custom AI tools for oil and gas extraction. Then, in December 2020, it announced the launch of a new data center for Google Cloud, its AI offering, in Saudi Arabia, in partnership with Saudi Aramco, an organization that, between 1965 and 2019, contributed close to 60 billion tons of CO2 equivalent to the atmosphere, more than any other in the world. Google and Saudi Aramco declined to tell AlgorithmWatch how the project progressed in 2021.

Climate was not the only area where 2021 made clear who, between humans and algorithms, had the upper hand. In September, a cache of documents dubbed the Facebook Files was leaked to journalists and to the United States Congress. They showed that Facebook executives deliberately tweaked algorithms to maximize revenues even when they knew it endangered users or institutions. In Europe, for instance, Facebook’s newsfeed algorithm encouraged political parties to adopt more extreme positions.

While some commentators found that the company had “lost control” over its systems, the leaked documents showed the contrary: at least some employees knew that the algorithms had horrible effects and offered solutions, which Facebook’s top management nixed.

Tech-solutionists vs tech-realists

In Europe, too, some people kept pushing for more deployments and less oversight of automated systems.

In Greece, an automated system was deployed to control asylum-seekers in the name of “security” (there is no evidence that the process made anyone more secure). In France, tax authorities launched a new algorithm to detect illegally built swimming pools (the efficacy of the system is disputed). In Spain, local train operators published a tender to identify passengers by race on video feeds, again in the name of security (linking moral traits to appearance has been denounced as pseudo-science since the 19th century).

But the proponents of many such projects backed down after mobilization by civil society. The Serbian government pushed, then withdrew, a law that would have enabled constant mass surveillance of the population with face recognition. A town in Sweden froze plans to deploy an algorithm to detect children at risk, citing legal concerns. Nissewaard, a Dutch city of 80,000, decided against using an algorithm that gave a “risk score” to welfare recipients after it was found unreliable. School canteens in Scotland stopped using payment by face recognition after parents complained.

Regulation might be on the way

Tribunals across the continent dealt blows to some automated systems. In Sweden, an administrative court upheld a fine imposed by the country’s data protection agency on the national police, which had used Clearview AI, a tool that claims to automatically identify a person based on a picture. Clearview has been heavily criticized for its carefree attitude to privacy and for its imprecision.

In Bologna, Italy, a tribunal found that Deliveroo, a food-delivery service, was in breach of the law. The platform’s algorithm illegally discriminated against workers, rating them as less reliable when they were sick, for instance.

Lawmakers took action, too. In Spain, the “Riders Law” took effect in August. It forces delivery companies such as Uber Eats and Deliveroo to share some details about algorithms that impact working conditions.

At the European level, von der Leyen’s Commission pushed ahead with its agenda of regulating Artificial Intelligence and published a draft AI Act in April. How much weight it will carry remains unclear. While the draft took a broad approach to AI at first, there are signs that both the Commission and the European Council now want to narrow its scope in order to regulate only self-learning systems.

Enforcement remains out of sight

However, the material consequences of these new developments might be much humbler than their ambition. AlgorithmWatch showed several times in 2021 that illegal, or at least very problematic, automated systems could go unchecked, and that no enforcement authority stepped in to control them.

In February, we revealed that the routing algorithms of TomTom and Google Maps encouraged car drivers to illegally drive down cycle streets. While both platforms promised to take action, they did not. More importantly, our reporting revealed that no one, from the municipal authorities to the police, considered it a problem.

In August, we revealed that LinkedIn’s recruiting tool automatically rejected out-of-country candidates, which is illegal in the European Union for EU nationals. Here again, LinkedIn promised change. And here again, European and national authorities saw no problem.

The list could go on. In Poland, a new law forces banks to explain why a loan was granted or refused. They do not comply. In France, the police may only use face recognition in cases of absolute necessity. They use it more than 1,000 times a day.

Auditing the machines

There are signs that enforcement could be stepped up. France’s national government launched a task force of 20 data scientists who support regulators in investigating algorithms. The Netherlands is about to start a similar scheme. Such efforts will give public authorities some of the means they need to take their regulatory powers beyond the paper they are written on.

Several civil society organizations attempted to audit the algorithms of large platforms, too. In January, The Markup unveiled its Citizen Browser, a representative, paid panel of the US population that shares detailed information about their Facebook browsing. Based on this data, The Markup was able to show several times that Facebook’s announcements were inaccurate, or plain lies.

AlgorithmWatch ran several auditing projects. We released DataSkop, a software platform for data donations. We inaugurated it with an audit of YouTube’s recommendation algorithm in the run-up to the German general elections. We continued to work on our Instagram Monitoring project, too, in partnership with NOS, Pointer and De Groene Amsterdammer in the Netherlands, and with Süddeutsche Zeitung in Germany.

But our efforts were cut short by Facebook. In June, the company demanded that we stop our experiment. We had no choice but to comply, being unable to sustain a fight against a company of that size. We were not the only targets: Mark Zuckerberg’s enforcers went after New York University researchers and attempted to intimidate the maintainer of an open-source library that automates Instagram interactions.

In 2021, lawmakers showed some willingness to rein in automated systems. The upcoming AI Act and Digital Services Act might give regulators and civil society the rights needed to investigate automated services and hold them to account. Ensuring that this translates into actual change for citizens will be a much bigger challenge, for 2022 and beyond.


* The Global Partnership on AI (GPAI) was founded by France and Canada and consists of a group of 150 experts who draft recommendations on how Artificial Intelligence should be used and regulated. AlgorithmWatch’s executive director Matthias Spielkamp is a founding member of the initiative and part of the working group on the responsible development and use of AI, which drafted the report, but he was not involved in writing it.
