The year the wrong Amazon burnt: 2019 in review

As the hype around “artificial intelligence” leveled off, the impact of automated decision-making made itself felt. Regulators and civil society fought hard to rein in Big Tech, but much remains to be done before a fair balance is reached.

Nicolas Kayser-Bril
Reporter

2019 saw the climate catastrophe continue apace, heralded by the 10,000 square kilometers of Amazon rainforest that were burnt in the first nine months of the year. The other Amazon, further north, published a review of its carbon footprint for the first time. Reading between the lines, it shows that its cloud-computing service, which is used to train many machine-learning algorithms, emits roughly three times as much carbon dioxide per dollar of sales as the rest of its businesses.
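That ratio can be derived from just two numbers per business line: emissions and revenue. A minimal sketch of the arithmetic, using invented placeholder figures rather than Amazon’s actual disclosures:

```python
# Back-of-the-envelope carbon intensity comparison.
# The figures below are invented placeholders, NOT Amazon's reported numbers;
# only the method (emissions divided by revenue) reflects the point made above.

def carbon_intensity(co2_tonnes: float, revenue_usd: float) -> float:
    """Tonnes of CO2 emitted per dollar of sales."""
    return co2_tonnes / revenue_usd

cloud = carbon_intensity(co2_tonnes=15e6, revenue_usd=25e9)   # cloud division
rest = carbon_intensity(co2_tonnes=30e6, revenue_usd=150e9)   # everything else

print(f"Cloud emits {cloud / rest:.1f}x more CO2 per dollar of sales")  # 3.0x
```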

Counting the climate cost

Several research articles pointed to the large amount of greenhouse gas emissions required to sustain “artificial intelligence.” A “transformer” model (a type of neural network used, for instance, in automated translation) trained in January produced about 300 tons of carbon dioxide, according to a review by US researchers. If civilization is to survive the 21st century, each person should emit no more than one ton of carbon dioxide per year, according to the reports of the Intergovernmental Panel on Climate Change (IPCC), a group of several thousand scientists.
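To make the comparison concrete, the training run can be expressed in person-years of that budget. A trivial calculation, using the rounded figures cited above:

```python
# One training run (~300 t CO2) measured against the IPCC-compatible
# budget of roughly 1 t CO2 per person per year, as cited above.
training_run_t = 300
budget_per_person_year_t = 1.0

print(training_run_t / budget_per_person_year_t)  # 300.0 person-years of budget
```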

The hype around “artificial intelligence” plateaued in 2019, and not only because of climate concerns. In November, the US National Transportation Safety Board published its review of a fatal crash involving an automated Uber vehicle. It turned out that the software driving the car had not been programmed to take into account pedestrians walking on the road. This investigation, among other setbacks, made it unlikely that swarms of self-driving cars will take over European roads any time soon.

Automated recruiting

Other sectors seemed less affected by the backlash. In recruiting, several companies offer to automate the assessment of job-seekers, using video or text samples to build “psychometric” profiles that are supposed to predict job performance. Critics, including Princeton professor Arvind Narayanan, call these tools “sophisticated random-number generators.” AlgorithmWatch reported on German examples.

Automated decision-making continued its progress in European public services. The police forces of Austria and Sweden were granted permission to use face recognition to automatically match a suspect’s face against databases of people thought to be more likely to engage in criminal activity, such as previous suspects or asylum seekers.

The police forces of at least ten EU countries already use face recognition. In the United States, an investigation published in May showed that police personnel might go to great lengths to obtain a match from their software, for instance by substituting the face of a celebrity lookalike when the suspect’s own image returned no result.
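Most face recognition systems boil down to comparing a numeric “embedding” of the probe photo against embeddings of database photos, and reporting anything above a similarity threshold. The vendors’ actual pipelines are not public; the sketch below is a generic illustration of the technique, with the threshold value chosen arbitrarily:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(probe: np.ndarray, gallery: dict[str, np.ndarray],
          threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return gallery identities whose embeddings are at least
    `threshold`-similar to the probe image's embedding."""
    hits = [(name, cosine_similarity(probe, emb))
            for name, emb in gallery.items()]
    return sorted([h for h in hits if h[1] >= threshold],
                  key=lambda h: h[1], reverse=True)
```

The threshold is an operator’s choice: lower it far enough, or replace a blurry probe photo with a cleaner image of a lookalike, and the system will return “matches” that the original evidence does not support, which is the practice the US investigation documented.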

Benefits distribution

Welfare delivery remains the area where automation is progressing fastest. New cases came to light in which citizens' applications are reviewed not by caseworkers but by algorithms. AlgorithmWatch reported on examples in Spain, Austria and Sweden, among others.

The Automating Society report we published in January assessed, for the first time, the situation in twelve European countries (a second edition of the report will be published in 2020). Most of the examples reviewed lack transparency and accountability.

Counterweights

2019 was also the year when civil society was able to act as a counterweight to other stakeholders in algorithmic decision-making. AlgorithmWatch, which grew from 7 to 13 team members over the year, took part in an investigation that led to the closure of a bogus recruitment tool in Finland, for instance.

In the Netherlands, several non-profits sued the government over the use of SyRI, a tool that automatically flags people suspected of welfare fraud based on an analysis of multiple data sources.

While Dutch judges are unlikely to rule against SyRI, the European Court of Human Rights would probably strike it down as a tool of “disproportionate surveillance” in violation of Article 8 of the European Convention on Human Rights, Mercedes Bouter, a lawyer specializing in privacy and technology, told AlgorithmWatch.

If this were to happen in the next few years, other ongoing cases would be affected. In Sweden, the union SSR started proceedings against the city of Trelleborg, which uses an algorithm to decide on the distribution of welfare payments. The data sources it processes range from tax returns to transport and employment records – likely “disproportionate” data gathering in the sense of Article 8, Ms Bouter said.
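Neither SyRI’s risk model nor Trelleborg’s decision rules are public. Purely as an illustration of why linking so many sources raises proportionality concerns, here is a hypothetical flagging rule of the general kind at issue; every field and rule below is invented:

```python
from dataclasses import dataclass

@dataclass
class CitizenRecord:
    declared_income: float     # from tax returns
    receives_benefits: bool    # from welfare records
    monthly_trips: int         # from transport records
    has_employer: bool         # from employment records

def flag_for_fraud_review(r: CitizenRecord) -> bool:
    """Invented rules: flag when the linked datasets look inconsistent
    with a benefits claim. Real systems' rules are not disclosed."""
    working_while_claiming = r.has_employer and r.receives_benefits
    commuting_without_job = r.monthly_trips > 20 and not r.has_employer
    return working_while_claiming or commuting_without_job
```

The Article 8 objection is aimed less at any individual rule than at the underlying data gathering: assembling tax, transport and employment records on every applicant is itself the arguably “disproportionate” step, however the flags are computed.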

Registries

A first step toward better control of automated decision-making systems would be to list them. AlgorithmWatch encourages the creation of mandatory registers of algorithmic systems used in the public sector. In the Netherlands, the lower house of parliament passed a motion for such a register in September. The government is expected to come back with proposals early next year.
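What such a register entry should record is still an open design question; the Dutch motion does not fix a schema. A sketch of plausible fields, all of them assumptions rather than any official specification:

```python
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    # All fields are assumptions about what a public-sector register
    # could usefully record; no official schema exists yet.
    system_name: str
    operating_agency: str
    purpose: str
    decision_role: str                  # e.g. "fully automated" / "advisory"
    data_sources: list[str] = field(default_factory=list)
    human_review_available: bool = True
    redress_contact: str = ""
```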

The second step would be to set up effective mechanisms to control algorithms, in the public and private sectors alike. Several proposals were made in France and Germany on that front, ranging from small task forces to larger administrations.

Concrete plans have yet to be made, but the new European Commission promised to make “AI regulation” a priority. 2020 could be a pivotal year for automated decision-making.


"The wrong Amazon is burning" was a slogan widely used at the Friday For Future protests throughout 2019.
