Mind The Algorithm

The question of whether automation benefits or harms us as citizens is primarily a political one. No one should let themselves be told that only those who have studied mathematics or computer science can take part in the discussion.

Matthias Spielkamp
Executive Director, Co-Founder & Shareholder

This essay was initially published in German in the Austrian newspaper Die Presse on 20 July 2019. The English version will be published in Alpbach Panorama, the magazine of the European Forum Alpbach, where Matthias Spielkamp will participate as a speaker on 26 August 2019. (Translated by Joanna White)

Imagine you’re looking for a job. The company you’re applying to tells you that the application process will be much easier if you give them the username and password to your personal email account. That way, they can just analyse your emails and create a personality profile. No complicated questionnaires to fill out—they’re boring and much less reliable than analysing your emails, since these are harder to manipulate. Everybody wins: the company, because it gets a far more precise application profile and more suitable employees; you, because it’s less work for you to find a job that suits you; and the company offering the analysis, because by doing so it can make a lot of money.

When we came across this example during research for our report “Automating Society”, which took stock of automated decision-making processes in 12 EU countries, at first we were dumbfounded. There must be some mistake, we thought, and the assertion from the analysis company that no applicant had ever declined a request to access their email account wasn’t exactly reassuring. Supposedly, all this is entirely legal, even under the stringent data protection standards introduced by the European Union.

It’s cases like this that make people feel uneasy, fearful even: afraid of being spied on and monitored so that other people can form an opinion of them, yet without having any kind of control over this image any more; afraid of becoming the object of decisions made by systems whose functions they do not understand, cannot understand—partly because they’re so complex, partly because those using them reveal nothing about how they work or the purposes for which they’re being used.

This uneasiness is justified on the one hand but presents a problem on the other. After all, automated processes aren’t bad in and of themselves—on the contrary. Who wants to use paper and pencil instead of a spreadsheet to do complex calculations on large sets of numbers? Who wants to manually filter their emails for the countless spam messages? And who wants to go back to catalogues of web pages they have to browse through like the Yellow Pages instead of just entering keywords into the search engine of their choice? These are just some examples of automated processes we use more or less consciously on a daily basis. Added to these are processes without which our society simply wouldn’t function: from the automated data transfer that constitutes the core of the Internet to automated electricity grid management, without which day-to-day life would come to a standstill. It’s not just that we’ve already been living in an automated world for a long time; we also benefit enormously from this automation.

The fact that these processes have been attracting more attention—and scepticism—in recent years lies chiefly in the claims made by providers that they can develop and operate systems to make automated decisions. So-called Artificial Intelligence is often mentioned in the same breath. It’s a very unfortunate choice of words. For many people it creates the impression that there are processes at work comparable to human intelligence, but being carried out by machines instead. This isn’t the case. Rather, these are statistical processes which give an impression of producing results in a similar way to human thought processes. Yet just because a computer can beat a human at a game of chess doesn’t mean it has a purpose or possesses creativity or autonomy—all characteristics that make a human “human” in the first place.

In order not to reinforce this false impression, we should instead be talking about systems that make automated or algorithmic (preliminary) decisions: ADM (automated decision-making) systems. Here human decisions are delegated to automated systems by transforming a model for decision-making, which has been developed by a human, into software so it can be implemented by a machine. For example, an automated car is equipped with a system capable of recognising and avoiding obstacles. That it should avoid obstacles has not been decided by the car or its computer system, but by the software developers. Nor is this changed by the fact that, increasingly, so-called “self-learning” systems are being installed which, over time, get better and better at recognising obstacles. The decision to avoid these obstacles rather than driving into them remains a human decision.

What might seem like academic nit-picking has weighty social consequences: the responsibility for the effects of operating these systems always lies with people, not machines. Saying ‘That’s what the computer decided’ to justify an action can be an expression of helplessness but also the result of deliberate (false) representations made by those using it. Yet it’s still always wrong, for which reason we cannot allow people to use this justification to evade their responsibility.

Austria’s Job Service (the Arbeitsmarktservice, or AMS) is the best example here. A statistical model was developed which, on the basis of education, gender, previous employment, age, nationality and other criteria, would work out the chance of an unemployed person finding work again. It divided people into three categories: those with high, medium and low chances of finding a job. In concrete terms this meant that from 2020, people in the middle category would be given more support, while job seekers with high or low chances on the labour market would receive less support.

The justification: resources for labour market policy should be used more efficiently and support for those seeking work should be more targeted. An objective which many would probably support when formulated in this general way. The problem: the weighting given to different factors corresponds to the status quo. For example, women with “care duties” have a worse chance of finding a job than women without these duties or men with children, to whom the question of “care duties” is never put in the first place. If, however, this fact now becomes the basis for a model that helps determine a person’s future, then it reinforces this state of affairs.

PERPETUATING AN EXISTING INJUSTICE

Those who defend this as an objective reflection of reality have failed to grasp the social dynamite it contains. The task of politics should be to create a fairer world where everyone can participate on an equal footing. At the AMS, the decision was taken to perpetuate an existing injustice and therefore reinforce it—a decision taken by humans, we might add. The algorithm developed for this purpose is really just a mechanism for implementing this political decision; it is for those who took that decision to justify and take responsibility for it.

It is the blindness to the politics at the heart of many automated processes that provokes justified criticism. For the risks have long been known. When, for example, in 1957 IBM began to develop the Semi-Automated Business Research Environment, SABRE, in order to sell tickets for flights operated by American Airlines, it wouldn’t be at all naïve to assume it had a worthy aim in mind: to improve what had always been a very cumbersome and error-prone reservation process for the airline and its customers. Over the following 25 years, however, American Airlines realised that the system might have other uses. When several other airlines also began using SABRE, American Airlines started to manipulate the system so that its flights would be favoured above those of other airlines. Travellers were no longer offered the cheapest ticket but the one that American Airlines wanted to sell. In a US congressional hearing, American Airlines president Robert L. Crandall was unmoved by accusations of having acted unfairly: ‘The preferential display of our flights, and the corresponding increase in our market share, is the competitive raison d’être for having created the system in the first place.’

In their seminal paper “Auditing Algorithms”, Christian Sandvig and his co-authors dubbed this perspective “Crandall’s Complaint”: why would you build and operate an expensive algorithm if you couldn’t bias it in your favour? The US government took a different view to Robert Crandall and in 1984 passed a now little-known regulation, widely regarded as the first legal regulation of algorithms in the world. Entitled “Display of Information”, it requires each airline reservation system to provide to any person upon request the current criteria used by the system to classify flights, the weighting given to each criterion, and the specifications used by the programmers in constructing the algorithm.

It is astonishing to see what has happened in the 35 years since this law was passed. On the one hand, there has been the development of ever better systems for automating complex processes, systems which already have, or could have, a huge effect on all our lives. Three examples: in the Netherlands, government authorities are attempting to identify “welfare fraudsters” by combining data from different agencies and filtering it through an algorithm; in Denmark, the government wants to introduce a system that will take data about families and use it to work out which children are at risk of neglect; in Poland, those out of work have since 2014 been either granted or denied help on the basis of an automated classification system.

On the other hand, regulation has not kept pace with developments. Whereas in 1984 transparency was mandated for a commercial system purely for reasons of consumer protection, i.e. in order to determine whether people were being disadvantaged in their capacity as consumers, the AMS system and the examples above show that we have long been dealing with decisions that encroach on our rights to a far greater extent. Yet all too often we are denied any information whatsoever on who is operating these systems to what end; who has developed them with whom; which political and statistical models they are built on; and who is being affected in what way.

INFORMATION IS BEING DENIED IN MANY COUNTRIES

The algorithm for the Dutch system is kept secret; even after filing a freedom of information request, a network of Dutch civil society organisations received no information from the government, at which point the network instituted legal proceedings, which are still ongoing. In Denmark, work by investigative journalists gave rise to mass protests from the population, with the result that in late 2018 the government postponed the launch of the programme indefinitely. And in Poland, civil society organisations had to resort to legal proceedings to gain further information so they could understand what was going on—and then voice fierce criticism. Yet only when the Polish Supreme Audit Office also picked apart the system, not only for being inefficient but also for discriminating against certain social groups, including single mothers and people with disabilities, and the Constitutional Tribunal ruled that the legal foundation for data processing was insufficient, did the government decide to discontinue using the system at the end of 2019.

It is a threat to democracy and the rule of law when processes with far-reaching consequences for individuals and society are kept secret. Yet demands for greater transparency are often countered with three arguments. First, such processes are usually protected as trade secrets by the firms that developed them. Second, disclosing an algorithm is neither necessary nor sufficient: unnecessary, since its function can also be described without publishing the code and, in any case, what matters is the outcome, not the process; insufficient, since not only the algorithm but also the data it works with is decisive, and the process can change dynamically, particularly when self-learning technologies are at play. Third, transparency alone is useless because it neither frees people from being at the mercy of a system nor gives them the chance to object or to claim restitution, for example through the payment of damages.

All this is correct. Therefore we must be clear that transparency alone is not the cure. But it is a vital precondition for using automated systems in a democracy. Only when we know why they’re being used (simply to save money, or because a problem seems too complex to be solved any other way?), what their purpose is, who is affected by them and in what way, who has developed them (a government authority on its own, with the help of a commercial company, or the company alone), and on what assumptions they’re based will we be able to decide. Decide whether we agree that the system be used for the stated purpose, or whether some decisions, for example on imprisonment, should never be left to automation. If we agree in general, we want to be able to decide how much information about the system’s workings we need in order to trust that it’s working in the way it should. And if we deem it appropriate that the code be revealed to government authorities, for example, then a company wishing to sell or operate such a system must submit to this. The code can then be examined in what’s known as an “in camera” process by experts who are sworn to secrecy. And when we’re sufficiently familiar with the workings, we have to decide what kind of control and oversight we consider adequate, and what opportunities to object we need, in order to avoid becoming the powerless objects of automated processes.

THE NEED FOR A REGULATORY FRAMEWORK

As a start, these demands form the foundations for regulatory frameworks, which we need to develop in order to operate automated decision-making systems in such a way that they strengthen the common good, and in order to prevent them from undermining it—by excluding or disadvantaging certain groups of people while a small number of companies skim the profit off the top.

And finally: the question of whether automation benefits or harms our societies, whether it benefits or harms us as citizens, is chiefly a political question. No one, therefore, should let themselves be told that only those who have studied maths or computer science can take part in the discussion. It’s a discussion for mainstream society. As it turned out, the Finnish firm that claimed to create personality profiles on the basis of private emails never actually had any customers; it simply wanted to profit from the hype around so-called artificial intelligence, as many firms are currently doing. But even if that had not been the case, the question of whether what it was offering was legitimate was never a question of technology.
