Automating Society 2019

For the report “Automating Society – Taking Stock of Automated Decision-Making in the EU”, our experts have looked at the situation not only at the EU level but also in 12 Member States: Belgium, Denmark, Finland, France, Germany, Italy, the Netherlands, Poland, Slovenia, Spain, Sweden and the UK. We not only assess the political discussions and initiatives in these countries, but also present a section “ADM in Action” for each of them, listing examples of automated decision-making already in use. This is the first time a comprehensive study of this kind has been conducted on the state of automated decision-making in Europe.

Introduction

Imagine you’re looking for a job. The company you are applying to says you can have a much easier application process if you provide them with the username and password for your personal email account. They can then simply scan all your emails and develop a personality profile based on the result. No need to waste time filling out a boring questionnaire and, because it’s much harder to manipulate all your past emails than to try to give the ‘correct’ answers to a questionnaire, the results of the email scan will be much more accurate and truthful than any conventional personality profiling. Wouldn’t that be great? Everyone wins: the company looking for new personnel, because they can recruit people on the basis of more accurate profiles; you, because you save time and effort and don’t end up in a job you don’t like; and the company offering the profiling service, because they have a cool new business model.

When our colleagues in Finland told us that such a service actually exists, our jaws dropped. We didn’t want to believe it, and it wasn’t reassuring at all to hear the company claimed that basically no candidate ever declined to comply with such a request. And, of course, it is all perfectly legal because job-seekers give their informed consent to open up their email to analysis—if you believe the company, that is. When we asked the Finnish Data Protection Ombudsman about it, he wasn’t so sure. He informed us that his lawyers were still assessing the case, but that it would take a couple of weeks before he could give his opinion. Since this came to light just before we had to go to press with this publication, please go to our website to discover what his final assessment is.

Is automation a bad thing?

In many cases, automating manual processes is fantastic. Who wants to use a calculator instead of a spreadsheet when doing complex calculations on large sets of numbers (let alone a pen, paper and an abacus)? Who would like to manually filter their email for spam any more? And who would voluntarily switch back to searching for information on the Web by sifting through millions of entries in a Yahoo-style catalogue instead of just typing some words into Google’s famous little box? And these examples do not even take into account the myriad automated processes that enable the infrastructure of our daily lives—from the routing of phone calls to automated electricity grid management, to the existence of the Internet itself. So not only have we lived in a world of automated processes for a very long time; most of us (we’d argue: all of us) also enjoy the many advantages that have come with them.

That automation has a much darker side to it has been known for a long time as well. When in 1957 IBM started to develop the Semi-Automated Business Research Environment (SABRE) as a system for booking seats on American Airlines’ fleet of planes, we can safely assume that the key goal of the airline was to make the very cumbersome and error-prone manual reservation process of the times more effective for the company and more convenient for the customers. However, 26 years later, the system was used for very different purposes. In a 1983 hearing of the US Congress, Robert L. Crandall, president of American Airlines, was confronted with allegations of abusing the system—by then utilised by many more airlines—to manipulate the reservation process in order to favour American Airlines’ flights over those of its competitors. His answer: “The preferential display of our flights, and the corresponding increase in our market share, is the competitive raison d’etre for having created the system in the first place”.

In their seminal paper “Auditing Algorithms”, Christian Sandvig et al. famously proposed to call this perspective Crandall’s complaint: “Why would you build and operate an expensive algorithm if you can’t bias it in your favor?” [IN 1] In what is widely regarded as the first example of legal regulation of algorithmically controlled, automated decision-making systems, US Congress in 1984 passed a little-known regulation as their answer to Crandall’s complaint. Entitled “Display of Information”, it requires that each airline reservation system “shall provide to any person upon request the current criteria used in editing and ordering flights for the integrated displays and the weight given to each criterion and the specifications used by the system’s programmers in constructing the algorithm.”

A changed landscape

If a case like this sounds more familiar to more people today than it did in 1984, the reason is that we’ve seen more automated systems developed over the last 10 years than ever before. However, we have also seen more of these systems misused and criticised. Advances in data gathering, the development of algorithms and computing power have enabled data analytics processes to spread into fields so far unaffected by them. Sifting through personal emails for personality profiling is just one example: from the automated processing of traffic offences in France (see p. 70) and the allocation of treatment for patients in the public health system in Italy (p. 88), to automatically identifying which children are vulnerable to neglect in Denmark (p. 50) and predictive policing systems in many EU countries, the range of applications has broadened to almost all aspects of daily life.

Having said all this, we argue that there is an answer to the question ‘Is automation a bad thing?’ And this answer is: it depends. At first glance, this seems to be a highly unsatisfactory answer, especially to people who need to make decisions about how to deal with these developments: Do we need new laws? Do we need new oversight institutions? Who do we fund to develop answers to the challenges ahead? Where should we invest? How do we enable citizens, patients, or employees to deal with this? And so forth. But everyone who has dealt with these questions already knows that it is the only honest answer.

Why automated decision-making instead of Artificial Intelligence?

One of the first hard questions to answer is that of defining the issue. We maintain that the term automated decision-making (ADM) better defines what we are faced with as societies than the term ‘Artificial Intelligence’, even though all the talk right now is about ‘AI’.

Algorithmically controlled, automated decision-making or decision support systems are procedures in which decisions are initially—partially or completely—delegated to another person or corporate entity, which in turn uses automatically executed decision-making models to perform an action. This delegation—not of the decision itself, but of its execution—to a data-driven, algorithmically controlled system is what needs our attention. In comparison, Artificial Intelligence is a fuzzily defined term that encompasses a wide range of controversial ideas and is therefore not very useful for addressing the issues at hand. In addition, the term ‘intelligence’ invokes connotations of a human-like autonomy and intentionality that should not be ascribed to machine-based procedures. Finally, systems that would not be considered Artificial Intelligence by most of today’s definitions, such as simple rule-based analysis procedures, can still have a major impact on people’s lives, e.g. in the form of scoring systems for risk assessment.
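To make this distinction concrete, the following is a minimal, purely hypothetical sketch (in Python) of such a rule-based scoring procedure; every rule, weight and field name is invented for illustration. Nothing in it involves machine learning or anything resembling ‘intelligence’, yet a fixed threshold on its output could still steer a consequential decision about a person.

```python
# Hypothetical example: a simple rule-based risk-scoring procedure.
# No machine learning is involved; the rules, weights, field names and
# threshold below are invented purely for illustration.

def risk_score(applicant: dict) -> int:
    """Return a risk score computed from fixed, hand-written rules."""
    score = 0
    if applicant.get("missed_payments", 0) > 2:
        score += 30
    if applicant.get("address_changes_last_year", 0) > 3:
        score += 20
    if applicant.get("income_reported") != applicant.get("income_registered"):
        score += 50
    return score

# A fixed cut-off turns the score into an automated decision (or a
# recommendation handed to a case worker).
FLAG_THRESHOLD = 60

applicant = {
    "missed_payments": 3,
    "address_changes_last_year": 1,
    "income_reported": 21_000,
    "income_registered": 24_500,
}

if risk_score(applicant) >= FLAG_THRESHOLD:
    print("flag claim for manual review")
else:
    print("process claim automatically")
```

Even a procedure this simple raises the questions discussed in this report: who chose the rules, weights and threshold, on what basis, and who is accountable for the decisions derived from them.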

In this report, we will focus on systems that affect justice, equality, participation and public welfare, either directly or indirectly. By saying systems instead of technologies we point to the fact that we need to take a holistic approach here: an ADM system, in our use of the term, is a socio-technological framework that encompasses a decision-making model, an algorithm that translates this model into computable code, the data this code uses as an input—either to ‘learn’ from it or to analyse it by applying the model—and the entire political and economic environment surrounding its use. This means that the decision itself to apply an ADM system for a certain purpose, the way it is developed (i.e. whether by a public sector entity or a commercial company), and how it is procured and finally deployed are all parts of this framework.

Therefore, when an ADM system like SyRI is used in the Netherlands to detect welfare fraud (see p. 101), we not only need to ask what data it uses, but whether the use of this data is legal. We also need to ask what decision-making model is applied and whether it has a problematic bias, e.g. because it used a biased data set or was developed by people with underlying prejudices that were not controlled for. Other questions then arise: why did the government come up with the idea to use it in the first place? Is it because there is a problem that cannot be addressed in any other way, maybe due to its inherent complexity? Is it because austerity measures led to a situation where there are no longer enough case workers, so automation is used as an option to save money? Or is it because of a political decision to increase pressure on poor people to take on low-paying jobs?

The focus of this report

All these aspects need to be taken into account when asking whether automation can help solve a problem. This is why we decided to focus on four different issues in this report:

  • How is society discussing automated decision-making? Here we look, on the one hand, at the debates initiated by governments and legislators, such as AI strategies and parliamentary commissions, and, on the other hand, we list civil society organisations that engage in the debate, outlining their positions with regard to ADM.
  • What regulatory proposals exist? Here, we include the full range of possible governance measures, not just laws. So we ask whether there are ideas for self-regulation floating around, a code of conduct being developed, technical standards to address the issue, and of course whether there is legislation in place or proposed to deal with automated decision-making.
  • What oversight institutions and mechanisms are in place? Oversight is seen as a key factor in the democratic control of automated decision-making systems. At the same time, many existing oversight bodies are still trying to work out what sectors and processes they are responsible for and how to approach the task. We looked for examples of those who took the initiative.
  • Last but not least: What ADM systems are already in use? We call this section ADM in Action to highlight the many examples of automated decision-making already being used all around us. Here, we tried to make sure that we looked in all directions: do we see cases where automation poses more of a risk, or more of an opportunity? Is the system developed and used by the public sector, or by private companies?

The goals of this report

When we set out to produce this report, we had four main goals in mind:

  1. To show that algorithmically driven, automated decision-making (ADM) systems are already in use all over the EU. So far, the discussion around the use of these systems, their benefits and risks, has been dominated by examples from the US: assessing the recidivism risk of criminals to determine whether they are released on parole or stay in jail; teachers being fired based on their automatically calculated performance scores; people in minority neighbourhoods paying higher car insurance premiums than people from wealthy areas with the same risk. We therefore want to make clear which similar and other ADM systems are in use in the EU, in order to better inform the discussion about how to govern their use.
  2. To outline the state of the political discussion not just on the EU level, but also in the member countries. We all know that Europe’s diversity can be a burden when it comes to information flow across borders, especially because of 24 different official languages. So it was clear to us that we needed to change this situation as best we could by providing in-depth research from member countries in a shared language accessible to policy makers on the EU level. We approached this challenge in the best of the Union’s traditions: as a cross-border, trans-disciplinary collaboration, pooling contributors from 12 different countries who speak their countries’ language(s) and understand their societies’ cultural contexts.
  3. To serve as the nucleus for a network of researchers focusing on the impact of automated decision-making on individuals and society. This network ranges from journalists specialising in the nascent field of algorithmic accountability reporting and academics from economics, sociology, media studies, law and political science, to lawyers working in civil society organisations looking at the human rights implications of these developments. We will attempt to build on this first round of research and extend the network in the coming years, because it is crucial to also include the many countries not covered in this initial report.
  4. To distil recommendations from the results of our findings: for policy makers from the EU parliament and Member States' legislators, the EU Commission, national governments, researchers, civil society organisations (advocacy organisations, foundations, labour unions etc.), and the private sector (companies and business associations). You will find these recommendations in the chapter "Recommendations".

The scope of this report

We view this report as an explorative study of automated decision-making both on the EU level and in selected Member States. It contains a wide range of issues and examples that justify a closer look, more in-depth research and discussion.

  • Geography: For the initial edition of this report, we were not in a position to cover all 28 Member States. Instead, we decided to focus on a number of key countries, while making sure that the EU as a whole would be properly represented. We also deliberately excluded examples of applications coming from outside the EU. We all know that not only GAFA (Google, Amazon, Facebook and Apple) but also IBM, Microsoft, Salesforce and other US companies have a dominant market share in many sectors, including in European countries. Still, we confined ourselves to companies from Member States in order to better showcase their important role in automated decision-making.
  • Topic: As laid out in the section “Why automated decision-making instead of Artificial Intelligence?”, the definitional framework on which this report is based is still at an early stage. Therefore, much of the discussion between the contributors centred on the question of whether a certain example fits the definition and should be part of the report or not. Most of the time, when there was latitude, we decided to err on the side of including it—because we see value in knowing about borderline cases at a stage when all of us are still trying to make sense of what is happening. When looking at examples framed as ‘AI’, ‘big data’ or ‘digitisation/digitalisation’—like the plethora of AI and big data strategies, politically convened commissions and so forth—we focused on aspects of automated decision-making and decision support whenever they were referenced. Where the term automation was not used, we still scanned for key concepts such as discrimination, justice, and equity/equality, since automation processes often hide behind these keywords.
  • Depth: Contributors to the report had approximately five days to research and complete their investigations. This means that they were not in a position to do any kind of investigative research, file freedom of information requests, or follow up resolutely in cases where companies or public entities denied their requests for information.

We would like to express our enormous gratitude to those who contributed to this report. As you can imagine, compiling it was a very special venture. It required a lot of expertise, but also spontaneity, flexibility and determination to piece this puzzle together. As always, this also means it required, at times, quite a lot of tolerance and patience when deadlines neared and pressure mounted. We consider ourselves lucky to have found all this in our colleagues. You can learn who they are in more detail in the information boxes underneath the country chapters.

We are also greatly indebted to Becky Hogge and the entire team at OSF’s information programme. When we approached Becky with the idea to produce this report, she was not only immediately willing to take the risk of embarking on this journey but she also saw the application through in almost no time. As a result, we were able to seize this opportunity to provide valuable input at a decisive stage of the debate.

Many shades of grey

In closing, let’s come back to the case of the company offering to create a profile by scanning your email. It seems to be a clear-cut case of a misuse of power. If someone is looking for a job, how can they be in a position to freely give their consent to a procedure the company filling a position wants them to undergo—especially if this is something as private as analysing their personal emails? This seems to be a highly unethical, if not illegal, proposal that we cannot possibly accept.

Now consider the following scenario: The reliability of the tests that recruiters ask candidates to take is highly controversial due to their social desirability bias, meaning that people tend to give answers not in the most truthful manner but in a manner they think will be viewed favourably by others. The company argues that this bias can be minimised by looking at a corpus of emails that is very hard to manipulate. Imagine that this claim is supported by sound research. In addition, the company argues that candidates’ emails are never permanently stored anywhere; they are only passed through the company’s system so that its analysis procedure can be applied to them, and then they disappear. Also, no human will ever look at any individual email. Now imagine this procedure is audited by a trusted third party—like a data protection authority, an ombudsman, or a watchdog organisation—who confirms the company’s claims. Lastly, imagine that all candidates are provided with information about this process and the logic of the underlying profiling model, and are then given the choice whether to undergo the ‘traditional’ process of filling out questionnaires, or to allow the company to scan their emails.

Given all these assumptions, would this change your mind about whether we can approve of this procedure, whether it should be legally permitted, and perhaps even whether you personally would consent to it? Mind you, we’re not going to give away what our decision is. Instead, we’ll leave you to contemplate the countless shades of grey in a world of automated decision-making, and hope that you are inspired by reading this report.

Brigitte Alfter, Ralph Müller-Eiselt, Matthias Spielkamp