RECOMMENDATIONS
In this report we have compiled findings from 12 EU Member States and at the EU level. In all countries, we looked at the debates focused on the development and application of automated decision-making systems. We identified regulatory strategies and compiled examples of ADM systems already in use. Based upon these findings, we have the following recommendations for policy makers in the European Parliament and in Member State parliaments, the EU Commission, national governments, researchers, civil society organisations (advocacy organisations, foundations, labour unions etc.), and the private sector (companies and business associations). These recommendations are not listed in order of importance, and we have refrained from referring to specific examples.
Focus the discussion on the politically relevant aspects. ‘Artificial Intelligence’ is all the rage right now, with debates ranging from relatively simple rule-based analysis procedures to the threat of machine-created ‘super-intelligence’ to humanity. It is crucial to understand what the current challenges to our societies are. ‘Predictive analytics’, used to forecast maintenance issues on production lines for yoghurt, should not concern us too much—except maybe when it relates to the allocation of research grants. However, predictive analytics used for forecasting human behaviour, be it in elections, criminal activity, or that of minors, is an entirely different story. Here, we need to guarantee that these systems are identified as crucial for society. They must be democratically controlled by a combination of regulatory tools, oversight mechanisms, and technology.
The debate around AI should be confined to current or imminent developments. There is a lot of discussion around the ideas of ‘artificial general intelligence’, ‘super-intelligence’, ‘singularity’ and the like. As attractive as these discussions may appear to some, right now they are based on mere speculation and are therefore a distraction from the most pressing question: how do we deal with current developments?
Consider automated decision-making systems as a whole, not just the technology. Automated decision-making processes are often framed as technology. As a result, the debate revolves around questions of accuracy, data quality and the like. This risks overlooking many of the crucial aspects of automated decision-making: the decision itself to apply an ADM system for a certain purpose, the way it is developed (i.e. by a public sector entity or a commercial company), and how it is procured and finally deployed. These are all part of the framework that needs to be taken into account when discussing the pros and cons of using a specific ADM application. This means that we should ask what data the system uses, whether the use of this data is legal, what decision-making model is applied and whether it has a problematic bias. But we also need to ask why companies or governments come up with the idea to use specific ADM systems in the first place. Is it because of a problem that cannot be addressed in any other way, perhaps due to the inherent complexity of the problem? Have austerity measures led to a situation where there are not enough resources for humans to deal with certain tasks, so that automation is used as an option to save money?
Empower citizens to adapt to new challenges. There are a number of ways we can enhance citizens’ expertise to enable them to better assess the consequences of automated decision-making. We would like to highlight the Finnish example: in order to support the goal of helping Finns understand the challenges ahead, the English-language online course Elements of Artificial Intelligence was developed as a public-private partnership and is now an integral part of the Finnish AI programme. This freely available course teaches citizens about basic concepts and applications of AI and machine learning. Almost 100,000 Finns have enrolled in the course to date, thus increasing public understanding of AI and enabling them to participate in public debate on the subject. In the course, some societal implications of AI are introduced, for example algorithmic bias and de-anonymisation, to underscore the need for policies and regulations that help society to adapt more easily to the use of AI.
Empower public administration to adapt to new challenges. Public administration not only procures a lot of automated decision-making systems, it also uses them for purposes that have a big impact on individuals and society, e.g. border control, crime prevention and welfare management. Public administration must ensure a high level of expertise inside its own institutions in order to either develop systems itself, or at least be in a position to meaningfully oversee outsourced development. This can be achieved by creating public research institutions, e.g. in cooperation with universities or public research centres, that can teach and advise civil servants. Such institutions should also be created at the EU level to assist Member States.
Strengthen civil society involvement. Our research shows that, even in some large Member States, there is a lack of civil society engagement and expertise in the field of ADM. This shortcoming can be addressed by a) civil society organisations identifying the consequences of automated decision-making as a relevant policy field in their countries and strategies, b) grant-making organisations earmarking parts of their budget for ADM, developing funding calls, and facilitating networking opportunities, and c) governments making public funds available for civil society interventions.
Make sure adequate oversight bodies exist and are up to the task. Oversight over automated decision-making systems is organised by sector. For example, there are different oversight bodies for traffic (automated cars), finance (algorithmic trading), and banking (credit scoring). This makes sense, since many systems need to be looked at in their respective contexts, with specific knowledge. It is doubtful, though, that many of the oversight bodies in place have the expertise to analyse and probe modern automated decision-making systems and their underlying models for risk of bias, undue discrimination, and the like. Here, Member States and the EU are called upon to invest in applied research to enable existing institutions to catch up, or to create new ones where needed. In addition, parliaments and courts need to recognise that it is they who must oversee the use of ADM systems by governments and public administrations. Therefore, they too need appropriate assistance to empower them to live up to this task.
Close the gap between Member States. Many countries in the EU have developed strategies for ‘digitisation/digitalisation’, ‘big data’, or ‘Artificial Intelligence’. Still, some countries are lagging behind when it comes to discussing the consequences that automated decision-making can have on individuals and society, either because of a lack of resources or because of differing priorities. Since the question of whether and how ADM systems should be used very much depends on the cultural context, expertise in Member States is needed. Some Member States should invest more in capacity building. The EU is doing a lot in this regard, e.g. by developing its own recommendations via the EU High-Level Expert Group on Artificial Intelligence, but this cannot replace efforts at the national level. It is up to individual countries to see whether there is a need to catch up. If that is the case, they can use this as an opportunity to create structures and mechanisms that help decision makers and the general public learn from leading countries. On the national level, this could happen in the form of intergovernmental working groups or bi- and multi-national cooperation. At the EU level, the EU Commission should continue to offer forums like the European AI Alliance to further this goal.
Don’t just look at data protection for regulatory ideas. Article 22 of the General Data Protection Regulation (GDPR) has been the focus of many discussions about automated decision-making in recent years. A consensus is starting to develop that a) the reach of Article 22 is rather limited (see page 42) and b) there are use cases of ADM systems that cannot be regulated by data protection (alone), e.g. predictive policing. These systems can have effects on communities – like over-policing – without the use of personal data, so the GDPR does not even apply. At the same time, since no individual is discriminated against if a neighbourhood slowly turns into a highly patrolled area, anti-discrimination laws are of no help either. There needs to be a discussion about how to develop governance tools for these cases. In addition, stakeholders need to look at creative applications of existing regulatory frameworks, like equal-pay regulation, to address new challenges like algorithmically controlled platform work, also known as the gig economy, and explore new avenues for regulating the collective effects of ADM altogether.
Involve a wide range of stakeholders in the development of criteria for good design processes and audits, including civil liberty organisations. In some of the countries we surveyed, governments claim that their strategy processes involve civil society stakeholders in order to bring diverse voices to the table. However, it is often the case that the term civil society is not well defined. It can be legitimately argued that the term civil society also includes academia, groups of computer scientists or lawyers, think tanks and the like. But if governments use this broad definition to argue that ‘civil society’ is on board, yet civil liberty organisations are not part of the conversation, then very important viewpoints might be missed. As a result, there is a risk that the multi-stakeholder label turns out to be mere window dressing. Therefore, it is critical that organisations focused on rights are included in the debate.