EUROPEAN UNION

By Kristina Penner

The EU is a lot of things: It is a union of 28 Member States that has a commission, a parliament, a council of national ministers, the Court of Justice, and a couple of other institutions. It has many funding programmes for research and development, committees such as the European Economic and Social Committee (EESC), an ombudsman and agencies like one for fundamental rights. In addition, there are scores of business and civil society pressure groups, professional societies and think tanks that try to interact with the EU institutions or influence policy making.

So when we started looking into where automated decision-making (ADM) is currently being discussed in the EU and what regulatory proposals already exist, we had many directions in which to look. Some places were obvious: The EU’s Declaration of cooperation on Artificial Intelligence (AI), the Commission’s Communication on Artificial Intelligence for Europe, its High-Level Expert Group focused on AI, the European Parliament’s resolution on robotics and the European Economic and Social Committee’s opinion on Artificial Intelligence (AI). These all state that the aim of using ADM is to simultaneously maximise the benefit to society, help business, spur innovation and encourage competition.

The term Artificial Intelligence is commonly used in these discussions; however, upon closer inspection it becomes clear that it is algorithmically controlled ADM systems that are at the centre of many of these debates. The EU’s algo:aware project is a case in point. Its mandate is to analyse „opportunities and challenges emerging where algorithmic decisions have a significant bearing on citizens, where they produce societal or economic effects which need public attention.” 1

This perspective should be welcomed, as it focuses on a key question: How much of our autonomy are we willing to cede to automated systems? And it raises another question: Do we have the appropriate safeguards in place to help us understand what we’re doing and enable us to stay in control of the process?

One of these safeguards, the General Data Protection Regulation (GDPR), has been hailed as one of the most powerful tools the EU has come up with to address automated decision-making. However, many experts now agree that it has shortcomings; others fear that, when it comes to tackling the challenges posed by ADM systems, the GDPR may not be of any help at all. Therefore, it could well be time to look in a different direction to discover other means that can be used to deal with these challenges. EU employment equality law could be one such direction. Alternatively, long-existing regulations, such as the Financial Market Directive with its rules for high frequency algorithmic trading, might serve as a model. Last but not least, civil society and standards bodies have begun to play an active role in shaping the discussion.

Political debates on aspects of automation – European Institutions

Declaration of cooperation on Artificial Intelligence (AI)

On April 10, 2018, twenty-five European countries signed the Declaration of Cooperation on Artificial Intelligence (AI) [EU 1] with the stated goal of building on „the achievements and investments of Europe in AI” [EU 1], as well as progressing towards the creation of a Digital Single Market. In the declaration, participating Member States agree to shape a European approach to AI, increase public and private investment, and commit to publish a coordinated plan on AI before the end of 2018. While the declaration focuses on increasing „competitiveness, attractiveness and excellence in R&D in AI”, it also states that the signatories want to foster the „development of AI in a manner that maximizes its benefit to economy and society” and „exchange views on ethical and legal frameworks related to AI in order to ensure responsible AI deployment.” With regard to automated decision-making processes, the signatories commit to „ensure that humans remain at the centre of the development, deployment and decision-making of AI, prevent the harmful creation and use of AI applications, and advance public understanding of AI”. In addition, the accountability of AI systems is supposed to be increased.

Communication on Artificial Intelligence for Europe

Two weeks after the publication of the Declaration of cooperation on Artificial Intelligence, on April 25, 2018, the European Commission published a Communication on Artificial Intelligence for Europe. [EU 2] Following the previously adopted declaration, it elaborates on the three pillars outlined as the core of the proposed strategy:

  1. Staying ahead of technological developments, building industrial capacity and strengthening public and private actors, including not only investments in research centres but also the development of an „AI-on-demand platform” to provide data and resources for the creation of a data economy
  2. Increasing preparedness for socio-economic changes by modernising education and training systems and supporting labour market transitions
  3. Developing AI ethics guidelines and ensuring legal clarity in AI-based applications (announced for 2019)

Although parts of the Communication’s strategy outline do not differentiate between AI and other ADM systems, clear responses to ADM can be found in the specific measures and policy packages it introduces.

Preparing for socio-economic changes

The Communication states that its goal is „to lead the way in developing and using AI for good and for all” and to follow an approach that „benefits people and society as a whole”. The main steps and measures outlined in the initiative focus on competitiveness and investments. It also pledges to encourage diversity and interdisciplinarity („diversity and gender balance in AI jobs”). The Communication explicitly addresses the risk posed by automated decision-making: „Some AI applications may raise new ethical and legal questions, related to liability or fairness of decision-making”. These risks are to be mitigated mainly by AI ethics guidelines (see below in this chapter) and by providing guidance on the interpretation of the Product Liability Directive.

Research and innovation on responsible and explainable AI

„AI systems should be developed in a manner which allows humans to understand (the basis of) their actions. […] Whilst AI clearly generates new opportunities, it also poses challenges and risks, for example in the areas of safety and liability, security (criminal use or attacks), bias and discrimination.” [EU 2]

The Commission announced that it would support (basic and industrial) research and innovation in fields built on the guiding principle of „responsible AI”. This includes investment in, and encouragement of, research and testing in sectors such as health, security, public administration and justice, with the goal of enabling policy makers to gain experience and to devise suitable legal frameworks.

Research on the development of explainable AI systems, and on unsupervised machine learning to increase transparency and minimise the risk of bias and errors, is only planned for beyond 2020.

One starting point is a pilot project 2 commissioned by the EU Commission on algorithmic awareness building. Recognising that algorithms play an increasingly important role in decisions of relevance to public policy, the Commission procured an in-depth analysis into algorithmic transparency. This aims to raise awareness and build an evidence base for the challenges and opportunities of algorithmic decisions. The project’s objective, among others, is to design or prototype solutions for a selection of problems. These include policy responses, technical solutions and private sector and civil society-driven actions in response to the challenges brought by ADM, including bias and discrimination.

Building a European Data Economy and creating a European Artificial Intelligence-on-demand platform

Already embedded in the Digital Single Market strategy, translated into proposals for action in its mid-term review, and now applied to AI, the EU Commission’s plan is to establish and continue the expansion of a „common European data space”. [EU 4] Identifying data as a key ingredient for competitive AI, the EU plans „a seamless digital area with the scale to enable the development of new products and services based on data”. This is to be realised, in particular, by increasing the accessibility and re-usability of public sector information, with the proposal to amend the Public Sector Information (PSI) Directive being a core element. [EU 5] The second comprehensive initiative in this regard is the creation of a European AI-on-demand platform aiming to strengthen a European AI community, which is already taking shape as AI4EU [EU 6] (see below).

The European Commission aims to facilitate the re-use of PSI such as legal, traffic, meteorological, economic and financial data throughout the European Union. This will be done by harmonising the basic conditions under which PSI is made available to re-users, in order to enhance the development of Community products and services based on PSI and to avoid distortions of competition. Stakeholder concerns relate above all to the necessary protection of personal data, especially in sensitive sectors such as health, when decisions on the re-use of PSI are taken (see the paragraph on the EDPS).

The development of ethics guidelines

The Communication — and the proposed implementation process behind the EU Initiative on AI — addresses ethical and legal questions. It states that the basis for work on these questions will be the EU’s values laid down in the Charter of Fundamental Rights of the EU.

By the summer of 2019, the Communication foresees the creation of a framework for all relevant stakeholders and experts — see European AI Alliance and High-Level Expert Group in this chapter — who will draft the guidelines, addressing issues such as the future of work, fairness, safety, security, social inclusion and algorithmic transparency. More broadly, the group will look at the impact on fundamental rights, including privacy, dignity, consumer protection and non-discrimination.

The development of the AI Ethical Guidelines will build on the Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems by the European Group on Ethics in Science and New Technologies (EGE). [EU 7] They will tackle questions on „liability or potentially biased decision-making”. The Communication affirms that the EU must develop and apply a framework to protect ethical principles such as accountability and transparency, in order to reach its stated goal of becoming „the champion of an approach to AI that benefits people and society as a whole”. [EU 2]

The development of the draft is accompanied by assessments from the EU Fundamental Rights Agency (FRA, see below), and the evaluations of the EU Safety framework.

The Commission is planning to systematically monitor AI-related developments, including policy initiatives in Member States, in order to develop an AI index to inform the discussion.

With work on a coordinated plan with Member States announced for the end of the year, again with a strong focus on competition and economic aspects, there is a risk that many of these newly created bodies and consultation processes will be insufficiently incorporated into the implementation framework.

The following statements and resolutions are also referred to or announced in the Communication, but not included in this report:

European Economic and Social Committee opinion on AI

The European Economic and Social Committee (EESC) is a consultative body and advisory assembly of the European Union (Article 300 TFEU). It is composed of „social partners” and economic and social interest groups, namely: employers/employers’ organisations, employees/trade unions and representatives of various other interests. [EU 10]

In May 2017, the EESC adopted a so-called „own-initiative opinion” on the „consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society” that was taken into account in the EU Communication on AI. In comparison to other papers and communiqués on the EU AI initiative, it presents a very precise, specific and in-depth analysis of different types and subfields of AI (narrow/general AI) and its parameters and consequences. The committee provides general recommendations, including a human-in-command approach for „responsible AI”. It identifies eleven areas where AI poses societal and complex policy challenges, namely: ethics, safety, privacy, transparency and accountability, work, education and skills, (in-)equality and inclusiveness, law and regulation, governance and democracy, warfare and super-intelligence. [EU 11]

In the paragraph on „Transparency, comprehensibility, monitorability and accountability”, the document deals with the actions and decisions that AI systems (through algorithms) take in people’s lives. The EESC argues that the acceptance, sustainable development and application of AI depend on the ability to understand, monitor and certify the operation, actions and decisions of AI systems, including retrospectively. The committee therefore advocates for transparent, comprehensible and monitorable AI systems whose operations are accountable. In addition, the EESC demands that recommendations be made to determine which decision-making procedures can and cannot be transferred to AI systems, and when human intervention is desirable or mandatory.

The opinion offers a critical and substantial analysis of ethical questions, like embedded bias in AI development. It highlights the responsibility of humans to ensure that accuracy, data quality, diversity and self-reflection are taken into account in the design of AI, and to reflect on the environment in which it is applied. The EESC calls for a „code of ethics for the development, application and use of AI so that throughout their entire operational process AI systems remain compatible with the principles of human dignity, integrity, freedom, privacy and cultural and gender diversity, as well as with fundamental human rights”. The committee also asks for the „development of a standardisation system for verifying, validating and monitoring AI systems” on a supra-national level.

EP Report on Civil Law Rules on Robotics / EP Committee on Robotics

In January 2017, the European Parliament adopted a report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). [EU 12] The report covers a wide range of areas, such as liability rules, ethical principles, standardisation and safety, data protection, human enhancement, and education and employment. It also provides a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees.

The European Commission was asked by the Committee on Robotics to use its findings and recommendations as guidelines for the implementation of the EU AI strategy. This included the creation of a legal framework to address the use of AI and robotics for civil use. It specifically mentions that the „further development and increased use of automated and algorithmic decision-making undoubtedly has an impact on the choices that a private person (such as a business or an internet user) and an administrative, judicial or other public authority take in rendering their final decision of a consumer, business or authoritative nature”, therefore „safeguards and the possibility of human control and verification need to be built into the process of automated and algorithmic decision-making”, including „the right to obtain an explanation of a decision based on automated processing”. [EU 12]

In this context, a public consultation with an emphasis on civil law rules was conducted by the Parliament’s Committee on Legal Affairs (JURI) to seek views on how to address the challenging ethical, economic, legal and social issues related to robotics and AI developments. [EU 13]

The consultation did not contain questions about automated or algorithmic decision-making, neither in the general public version nor in the expert version of the questionnaire. However, some submissions explicitly referred to „algorithmic discrimination“ and „transparency of algorithmic decision-making“. In addition, one submission noted that fundamental rights „would be at risk if unethical practice is facilitated by virtue of algorithms focused on commercial gain, for example because humans ‘allow’ or ‘rely’ on robot sorting techniques that are discriminatory and may be unfair and undermine dignity and justice”. Another respondent wrote: „While the EU has started to address and regulate, in the EU General Data Protection Regulation, the use of automated individual decision-making systems, such as profiling, further work is needed to fully understand the functioning of these systems and develop adequate safeguards to protect human rights and dignity.”

A highly controversial proposal appears in paragraph 59 f) of the resolution. It calls for the creation of „a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently”. A group of around 285 „political leaders, AI/robotics researchers and industry leaders, physical and mental health specialists, law and ethics experts” signed an open letter on „Artificial Intelligence and Robotics” [EU 14], criticising the idea as misguided from a technical, ethical and legal perspective. Both the EESC’s opinion (see above), in its article 3.33 [EU 15], and UNESCO’s COMEST report on Robotics Ethics of 2017, in its article 201 [EU 16], oppose any form of legal status for robots or AI.

European Union Agency for Fundamental Rights

The European Union Agency for Fundamental Rights (FRA [EU 17]) is the EU’s centre of fundamental rights expertise. Established in 2007 and based in Vienna, it is one of the EU’s decentralised agencies set up to provide expert advice to EU institutions and Member States. The FRA is an independent EU body, funded by the Union’s budget, and is a member of the EU High-Level Expert Group on AI.

Stating that the intersection of rights and technological developments warrants a closer analysis, the FRA actively examines two aspects of automated—or data-driven, as they call it—decision-making: Its effects on fundamental rights and the potential for discrimination in using big data for ADM. In 2018, the FRA started a new project on „Artificial Intelligence, Big Data and Fundamental Rights”, with the aim of contributing to the creation of guidelines and recommendations in these fields. [EU 18] In a focus paper they point out that „When algorithms are used for decision-making, there is potential for discrimination against individuals. The principle of non discrimination, as enshrined in Article 21 of the Charter of Fundamental Rights of the European Union (EU), needs to be taken into account when applying algorithms to everyday life.” [EU 19] It also suggests potential ways of minimising this risk, like a strong inter-disciplinary approach: „Although data protection principles provide some guidance on the use of algorithms, there is more that needs to be considered. This requires strong collaboration between statisticians, lawyers, social scientists, computer scientists and subject area experts. In this way, a truly fundamental rights-compliant approach can be developed.”

The project is collecting data on the fundamental rights implications and opportunities related to AI and Big Data in order to support development and implementation of policies.

The agency will further analyse the feasibility of carrying out online experiments and simulation case studies using modern data analysis tools and techniques.

In December 2018, the FRA published an update [EU 20] of its „guide on preventing unlawful profiling” from 2010. The update takes into account legal and technological developments and the increased use of profiling by law enforcement authorities. The guide also widens its scope to include border management, and offers „practical guidance on how to avoid unlawful profiling in police and border management operations.”

AI4EU – European AI-on-demand platform

The main elements of the AI4EU project [EU 22] are the creation of a European AI-on-demand platform and the strengthening of the „European AI community”. Other areas of action are called „Society and European Values”, „Business and Economy”, and „AI Research and Innovation”. The platform is supposed to „act as a broker, developer and one-stop shop providing and showcasing services, expertise, algorithms, software frameworks, development tools, components, modules, data, computing resources, prototyping functions and access to funding.” Among a long list of activities to reach this goal is the development of a „Strategic Research Innovation Agenda for Europe”, including ethical, legal and socio-economic aspects.

With regard to ADM, it remains to be seen what the establishment of the AI4EU Ethical Observatory will look like. Its mandate is to ensure respect for human-centred AI values and European regulations.

High-Level Expert Group on Artificial Intelligence

To support the implementation of the European strategy on AI, the Commission appointed 52 experts, including representatives from academia, civil society and industry, to the High-Level Expert Group on AI (AI HLEG). [EU 23]

The group is tasked to prepare ethics guidelines that build on the work of the European Group on Ethics in Science and New Technologies and of the European Union Agency for Fundamental Rights (see both in this chapter); a first draft was published as the group’s first deliverable in December 2018. The guidelines will cover issues such as fairness, safety, transparency, the future of work, democracy and, more broadly, the impact of AI and automated decision-making on the application of the Charter of Fundamental Rights, including: privacy and personal data protection, dignity, consumer protection and non-discrimination.

The group is further mandated to support the Commission through recommendations regarding the next steps on how to address mid-term and long-term opportunities and challenges related to Artificial Intelligence. The AI HLEG’s recommendations will feed into the policy development process, the legislative evaluation process, and the development of a next-generation digital strategy.

Overall, the AI HLEG serves as the steering group for the European AI Alliance’s work (see below), and it interacts with other initiatives, helps stimulate a multi-stakeholder dialogue, gathers participants’ views and reflects them in its analysis and reports. With these results it supports the Commission on further engagement and outreach mechanisms to interact with a broader set of stakeholders in the context of the AI Alliance.

European AI Alliance

The European AI Alliance is hosted and facilitated by the EU Commission. [EU 24] Its members, including businesses, consumer organisations, trade unions, and other representatives of civil society bodies, are supposed to analyse the emerging challenges of AI, interact with the experts of the High-Level Expert Group on Artificial Intelligence (AI HLEG) in a forum-style setting, and to feed into the stakeholder dialogue. [EU 25] By signing up to the Alliance, members get access to a platform where they can offer input and feedback to the AI HLEG. [EU 26] The AI HLEG drew on this input when preparing its draft AI ethics guidelines and completing its other work. Moreover, the discussions hosted on the platform are supposed to directly contribute to the European debate on AI and to feed into the European Commission’s policy-making in this field.

„[…] Given the scale of the challenge associated with AI, the full mobilisation of a diverse set of participants, including businesses, consumer organisations, trade unions, and other representatives of civil society bodies is essential.” [EU 25]

The Alliance is also a place to share best practices, network and encourage activities related to the development of AI and is open to anyone who would like to participate in the debate on AI in Europe.

Political debates on aspects of automation – Civil Society

AccessNow

AccessNow is an international non-profit group focusing on human rights, public policy and advocacy. [EU 27] In their new report on Human Rights in the Age of Artificial Intelligence [EU 28], they contribute to the analysis of AI and ADM and conceptualise its risks from a human rights perspective.

As human rights are universal and binding and more clearly defined than ethical principles, AccessNow assesses how human rights complement existing ethics initiatives, as „human rights can provide well-developed frameworks for accountability and remedy.”

In their report, they examine how current and near-future uses of AI could implicate and interfere with human rights. They emphasise that the scale at which AI can identify, classify, and discriminate among people magnifies the potential for human rights abuses in both reach and scope. The paper explores how AI-related human rights harms disproportionately affect marginalised populations, and develops recommendations on how to address such harms.

BEUC – The European Consumer Organisation

BEUC (Bureau Européen des Unions de Consommateurs) is a Brussels umbrella group for European consumer protection organisations. It represents consumer organisations and defends the interests of consumers at the EU level. BEUC investigates EU decisions and developments likely to affect consumers, in areas including AI, the Digital Single Market, the digitalisation of finance, online platforms, privacy and data protection. [EU 29]

In their position paper Automated Decision Making and Artificial Intelligence—A Consumer Perspective from June 2018 [EU 30], BEUC explicitly analyses the increased use of automated decision-making based on algorithms for commercial transactions and its impact on the functioning of consumer markets and societies. It calls for products to be law-compliant by default and demands that „risks, such as discrimination, loss of privacy and autonomy, lack of transparency, and enforcement failure are avoided.”

Looking into the danger of discrimination, it points out that consumers are categorised and profiled with an increasing degree of precision. „The risk of discrimination, intended or unintended, resulting from data input that is not relevant enough to reach a correct conclusion, persists. The user may be deprived of a service or denied access to information, implying severe societal implications.” This categorisation leads to different treatment of each user, whether in the prices they receive or in the deals and services they are offered, based on the association of the customer with a certain group. The report addresses the questions of how ADM processes can be audited and what appropriate measures for correcting errors could look like.

Commenting on the EU Communication on AI, BEUC criticises the absence of any intention to update consumer rights laws with transparency obligations. Such obligations would help ensure that consumers are informed when using AI-based products and services, particularly about the functioning of the algorithms involved and about their rights to object to automated decisions. On the other hand, BEUC applauds the EU Commission’s plan to improve consumer access to their own health data.

CLAIRE

CLAIRE (Confederation of Laboratories for Artificial Intelligence Research in Europe) is an initiative from the European AI community, publicly launched on June 18, 2018, with a document signed by 600 senior researchers and other stakeholders in Artificial Intelligence. [EU 31] It seeks to strengthen European excellence in AI research and innovation, and proposes the establishment of a pan-European Confederation of Laboratories for Artificial Intelligence Research in Europe, aiming for „brand recognition” similar to the European Organisation for Nuclear Research (CERN). Part of CLAIRE’s vision is to „focus on trustworthy AI that augments human intelligence rather than replacing it, and that thus benefits the people of Europe.”

European Association for Artificial Intelligence

The European Association for Artificial Intelligence (EurAI, formerly ECCAI) [EU 32] was established in July 1982 as a representative body for the European Artificial Intelligence community. Its aim is to promote the study, research and application of Artificial Intelligence in Europe. EurAI offers courses, awards and grants for research and dissertations, publishes the journal „AI Communications”, and co-organises the ECAI conference series. In addition, it participates in the development of recommendations to EU institutions. For example, in January 2018 the European Commission, in collaboration with EurAI, organised a workshop on the European AI landscape. It considered academic, industry, and government AI initiatives, with the aim of sharing information and strategies for AI across Europe. All countries in this report have member societies in EurAI.

When asked whether there is currently an ethically correct AI, the president of EurAI, Barry O’Sullivan, had a clear answer: „‘No.’ […] In principle, the morality of artificial intelligence is a matter for negotiation, for ‘there is no consensus on ethical principles—they depend on social norms, which in turn are shaped by cultural values’, O’Sullivan added”. [EU 33]

European Digital Rights (EDRi)

European Digital Rights (EDRi) is an association of and a platform for civil and human rights organisations from across Europe. [EU 34] EDRi’s central objective is to promote, protect and uphold civil and human rights in the digital environment. The organisation’s goals are to provide policy makers with expert analysis of digital rights issues, foster agenda setting, and coordinate actions between the national and European level in order to ensure that civil society interests and perspectives are included in debates and policy making. EDRi’s key priorities for the next years are privacy, surveillance, net neutrality, and copyright reform. The perspective that information technology has a revolutionary impact on our society is the common thread in their campaigns. Dealing with new regulatory measures, EDRi published „A Digestible Guide to Individual’s Rights under GDPR”.

euRobotics

euRobotics AISBL (Association Internationale Sans But Lucratif) [EU 35] is a Brussels-based non-profit association for stakeholders in European robotics. It was formed to engage, from the private side, in a contractual public-private partnership with the European Union, called SPARC. One of the association’s main missions is to collaborate with the European Commission to develop and implement a strategy and a roadmap for research, technological development and innovation in robotics. The „ethical, legal and socio-economic issues in robotics” working group published a first position paper in early 2017. [EU 36]

Regulatory and self-regulatory measures

EU General Data Protection Regulation (GDPR) and ADM

„(1) The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

(2) Paragraph 1 shall not apply if the decision: …”—Art. 22 (1) and (2), GDPR

It is debated whether the GDPR [EU 37] offers the „potential to limit or offer protection against increasingly sophisticated means of processing data, in particular with regard to profiling and automated decision-making”. [EU 38] But while the GDPR includes a definition of the concept of profiling, mentions ADM explicitly (e.g. in Art. 4, 20, and 22), generally supports the protection of individual interests („protection of interests of the data subjects”, obligations to risk assessment in Art. 35), and widens the rights of „data subjects” impacted by „solely automated” ADM with „legal” or „similarly significant” impact, it remains unclear under what circumstances these protections apply. [EU 37]

More precisely, the GDPR sets out three criteria that must all be met for the right in Art. 22 to apply to ADM: (1) the decision-making is solely automated, (2) it is based on personal data, and (3) the decision produces legal effects concerning the data subject or similarly significantly affects him or her.
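
Read as a conjunctive test, these criteria can be expressed in a few lines of code. The following minimal Python sketch is purely illustrative; the field names are our own, and whether a real-world system satisfies each condition is exactly what the controversy described below is about:

```python
# Illustrative sketch only: the three cumulative conditions of Art. 22(1) GDPR
# expressed as a boolean test. Field names are hypothetical, and whether a given
# real-world system actually meets each condition is precisely what is contested.
from dataclasses import dataclass

@dataclass
class Decision:
    solely_automated: bool    # no meaningful human involvement in the decision
    personal_data: bool       # the processing is based on personal data
    significant_effect: bool  # legal or similarly significant effect on the subject

def art22_applies(d: Decision) -> bool:
    """All three criteria must be met; if any one fails, Art. 22 offers no protection."""
    return d.solely_automated and d.personal_data and d.significant_effect

# A system that merely *recommends* a decision to a human fails criterion (1),
# which is why systems preparing human decisions may fall outside Art. 22 entirely.
print(art22_applies(Decision(solely_automated=False, personal_data=True, significant_effect=True)))  # False
```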

It remains a matter of controversy among experts what the GDPR counts as a „decision”, and which circumstances and „legal effects” have to occur for the prohibition to apply. Furthermore, the regulation does not reflect the diversity of ADM systems already implemented, including the various scenarios in which people are involved who, consciously or unconsciously, implement ADM or follow its recommendations unquestioningly. [EU 39]

Therefore, critics and rights advocates question the scope and effectiveness of the application of Art. 22 to ADM. [EU 40] They see little room for manoeuvre when it comes to explicit, well-defined and effectual rights, especially against group-related and societal risks and the impact of automated decision-making systems. Moreover, authoritative guidance on the article’s interpretation and application to ADM is lacking. [EU 39]

EDRi criticises the dilution of the right not to be subjected to automated decisions in Art. 20 of the GDPR leading to a lack of safeguards against the negative effects of profiling on data subjects’ privacy, among others. [EU 41] Nevertheless, Privacy International points out in their report „Data Is Power: Profiling and Automated Decision-Making in GDPR” that for the first time in EU data protection law the concept of profiling is explicitly defined (Art. 4(4)). It is referred to as „the automated processing of data (personal and not) to derive, infer, predict or evaluate information about an individual (or group), in particular to analyse or predict an individual’s identity, their attributes, interests or behaviour”. [EU 38]

Other aspects discussed in regard to the „exceptionally” permissible ADM under the GDPR are transparency and information obligations and the role of data controllers. [EU 40] After the approval of the GDPR in 2016, some researchers and experts claimed that at least a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems would be legally mandated by the GDPR, while others were very critical of this reading and saw strong limitations. [EU 39] [EU 40]

Taking into account the previously outlined criteria for the GDPR to apply to ADM, critics doubt the legal effect and the feasibility of such a right, because of a lack of precise language and of explicit, well-defined rights and safeguards against automated decision-making. [EU 39] Moreover, the GDPR’s obligations—including systemic and procedural provisions and regulatory tools given to data controllers and data protection authorities, e.g. to carry out impact assessments and data protection audits—focus on the protection of individual rights and freedoms. The GDPR transparency rules do not include mechanisms for the external in-depth scrutiny of ADM systems that would be necessary to protect group-related and societal interests such as non-discrimination, participation or pluralism. [EU 39]

This means that there is a high probability that the GDPR’s provisions specific to ADM will only apply in the rarest of cases; systems preparing human decisions and giving recommendations may still be used. But there is a lively debate about what the impact of the regulation on the development of ADM will look like. Complementary approaches to strengthen measures within the GDPR, as well as alternative regulatory tools, are being discussed to rectify already implemented ADM systems, e.g. by using regulation already in place (consumer protection law, competition law, and media law). [EU 40]

The European Data Protection Board (EDPB) [EU 42], which replaced the Article 29 Working Party (WP29) on the Protection of Individuals with regard to the Processing of Personal Data, is responsible for the development and publication of ‘guidelines, recommendations and best practices’ on the GDPR from the EU side. [EU 42]

The European data protection bodies’ Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (developed by WP29, endorsed by the EDPB, last revised and adopted in February 2018) suggest not only more comprehensive definitions of both concepts and their possible implications, but also general and specific provisions. They provide examples of the conditions necessary for the protection to apply. According to these guidelines, a „legal or similarly significant impact” is manifest in the cancellation of a contract, the denial of social benefits, decisions affecting access to education or eligibility for credit, or processing that leads to the exclusion or discrimination of individuals and affects the choices available to them. [EU 43]

Police Directive

The EU data protection reform package of 2016, which included the GDPR, also involved a directive on data protection in the area of police and justice [EU 44] as a lex specialis, adopted on May 5, 2016 and applicable as of May 6, 2018. Whereas the GDPR is directly applicable as a regulation, the directive first had to be transposed into national law, allowing more room for variation at the national level. It is also intended to facilitate cross-border cooperation and to regulate the exchange of data, e.g. with Interpol.

The directive regulates the processing of personal data by criminal law enforcement authorities and further agencies responsible for the prevention, investigation, detection and prosecution of criminal offences. It is intended to ensure, in particular, that the personal data of victims, witnesses, and suspects of crime are duly protected.

The directive applies key principles and provisions of the GDPR to European criminal law enforcement authorities, though with adjusted requirements and specified exceptions. Data protection officers, for example, are to be designated in law enforcement authorities and carry the same responsibilities as their counterparts under the GDPR. A more complex requirement, on the other hand, is the obligation to distinguish personal data based on facts from personal data based on personal assessments (Art. 7).

Regarding automated individual decision-making, it states in Art. 11:

  1. „Member States shall provide for a decision based solely on automated processing, including profiling, which produces an adverse legal effect concerning the data subject or significantly affects him or her, to be prohibited unless authorised by Union or Member State law to which the controller is subject and which provides appropriate safeguards for the rights and freedoms of the data subject, at least the right to obtain human intervention on the part of the controller.
  2. Decisions referred to in paragraph 1 of this Article shall not be based on special categories of personal data referred to in Article 10, unless suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.
  3. Profiling that results in discrimination against natural persons on the basis of special categories of personal data referred to in Article 10 shall be prohibited, in accordance with Union law.” [EU 44]

One concern is the applicability of the Police Directive to public–private partnerships. Taking into account the complexity and legitimacy of these forms of cooperation, e.g. when outsourcing the technological implementation of measures for combatting cybercrime and law enforcement data processing, it is not clear if these structures are subject to the data protection-related regimes of the GDPR and the Police Directive. [EU 45]

Strengthening trust and security

The European Commission’s initiatives to improve online security, trust and inclusion moreover comprise (1) the ePrivacy Regulation [EU 46] [EU 47], (2) the Cybersecurity Act [EU 48], and (3) the Safer Internet Programme. [EU 49]

EU Data Protection Bodies

European Data Protection Supervisor

In addition to the European Data Protection Board (see above, under GDPR), the European data protection bodies include the European Data Protection Supervisor (EDPS). The EDPS, as an independent supervisory authority, has the responsibility to monitor the processing of personal data by EU institutions and bodies, to advise on policies and legislation that affect privacy, and to cooperate with similar authorities at the national and international level to ensure consistent data protection. [EU 50]

The Supervisor and Assistant Supervisor were appointed in December 2014 with the specific remit of being more constructive and proactive. In March 2015, they published a five-year strategy [EU 51] setting out how they intended to implement this remit, and to be accountable for doing so. In the strategy they first recognise:

„Big data challenges regulators and independent authorities to ensure that our principles on profiling, identifiability, data quality, purpose limitation, and data minimisation and retention periods are effectively applied in practice.

Big data that deals with large volumes of personal information implies greater accountability towards the individuals whose data are being processed. People want to understand how algorithms can create correlations and assumptions about them, and how their combined personal information can turn into intrusive predictions about their behaviour.“ [EU 51]

The EDPS’s action plan to tackle these and other challenges includes the following:

The EDPS’s objective and mandate is to ensure that data protection is integrated into proposals for legislation that affect privacy and personal data protection in the EU. The European Commission consults the EDPS before it adopts a proposal for new legislation that is likely to have an impact on individuals’ right to the protection of their personal data. The EDPS also provides advice on EU initiatives that are not legally binding, so-called EU soft law instruments. It issues comments and opinions on proposals for legislation, addressed to all three EU institutions involved in the legislative process. In addition, it publishes recommendations or comments on its own initiative, when appropriate and when there is a matter of particular significance. [EU 52]

The EDPS may also intervene before the EU courts either at the Court’s invitation or on behalf of one of the parties in a case to offer data protection expertise. Moreover, the EDPS monitors new technologies or other societal changes that may have an impact on data protection.

On July 10, 2018, the EDPS issued an Opinion on the European Commission’s Proposal for a new directive on the re-use of Public Sector Information (PSI). It provides specific recommendations on how to clarify the relation and coherence of the PSI Directive with the exceptions outlined in the GDPR. [EU 53]

In its opinion on the proposal, the EDPS points out the relevance of the revision of the PSI Directive as part of the EU vision on „Good Big Data” and emphasises how the smart use of data, including its processing via Artificial Intelligence, can have a transformative effect on all sectors of the economy. At the same time, the EDPS demands that the legislator better address stakeholder concerns related to the necessary protection of personal data, especially in sensitive sectors such as health, when deciding on the re-use of PSI (for example, by clarifying the risks of re-identification of anonymised data and the safeguards against those risks).

In that context, the EDPS recalls the data protection relevance of the key principles that, according to the European Commission, should be respected in the context of data re-use, namely (i) minimising data lock-in and ensuring undistorted competition; (ii) transparency and societal participation regarding the purpose of the re-use vis-à-vis citizens/data subjects, as well as transparency and clear purpose definition between the licensor and the licensees; (iii) data protection impact assessments and appropriate data protection safeguards for re-use (according to a ‘do no harm’ principle from the data protection viewpoint).

Additionally, the opinion provides further recommendations on anonymisation and its relation to costs and data protection. It also focuses on data protection impact assessments (DPIAs) for sensitive sectors, such as healthcare, while taking into account an ‘acceptable re-use policy’.

Some of the EDPS's output relevant to ADM includes:

EU employment equality law

On the basis of three directives (2000/43/EC, 2000/78/EC, 2002/73/EC) and various court decisions, especially those dealing with Art. 157 of the Treaty on the Functioning of the European Union (TFEU), a general framework for equal treatment in employment and occupation has been established in the EU. This framework could be applied to protect workers in situations where work-related decisions are not taken by a human being but (semi-)automatically by an algorithm, since this is a potential source of discrimination. The framework does not just cover formally employed persons, as many national regulations do, but takes a wider approach, so it may be applicable to platform workers as well. In addition, the recent Egenberger decision of the ECJ established that the prohibition of discrimination applies to all grounds laid down in Article 21 of the Charter of Fundamental Rights of the European Union, namely sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age, or sexual orientation. Since this list is not conclusive („Any discrimination based on any ground such as …”), there is room for other grounds of discrimination to be addressed.

Related documents:

Financial market directive – MiFID 2 and MiFIR

In June 2014, the European Commission adopted new rules revising the Markets in Financial Instruments Directive (2004/39/EC). The MiFID framework has been applicable since January 3, 2018. These new rules consist of a directive (MiFID 2) and a regulation (MiFIR). [EU 63] MiFID 2 aims to reinforce the rules on securities markets by, among other things, introducing rules governing high frequency algorithmic trading (HFAT).

The new legislative framework is supposed to „ensure fairer, safer and more efficient markets and facilitate greater transparency for all participants” [EU 65]. The protection of investors is strengthened through the introduction of new (reporting) requirements, tests and product governance, and independent investment advice, as well as the improvement of requirements in several areas. These include the responsibility of management bodies, inducements, information and reporting to clients, cross-selling, remuneration of staff, and best execution.

HFAT investment firms and trading venues face a set of organisational requirements, e.g. to store time-sequenced records of their algorithmic trading systems and trading algorithms for at least five years. To enable monitoring by Member State competent authorities, the European Securities and Markets Authority (ESMA) proposes that the records should „include information such as details of the person in charge of each algorithm, a description of the nature of each decision or execution algorithm and the key compliance and risk controls.” [EU 64]
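
Expressed as a data structure, the kind of record ESMA describes might look like the following minimal sketch. The field names and values are our own illustration, not a regulatory schema:

```python
# Minimal illustration of the kind of time-sequenced record ESMA describes for
# HFAT firms; field names are hypothetical and not taken from any regulatory schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

RETENTION_YEARS = 5  # records are to be kept for at least five years

@dataclass
class AlgoTradingRecord:
    timestamp: datetime                    # time-sequenced entry
    algorithm_id: str
    person_in_charge: str                  # details of the person in charge of the algorithm
    description: str                       # nature of the decision or execution algorithm
    compliance_controls: list[str] = field(default_factory=list)  # key compliance controls
    risk_controls: list[str] = field(default_factory=list)        # key risk controls

record = AlgoTradingRecord(
    timestamp=datetime.now(timezone.utc),
    algorithm_id="exec-vwap-v3",
    person_in_charge="Head of Algorithmic Trading",
    description="Execution algorithm slicing parent orders along the volume profile",
    compliance_controls=["pre-trade price collar"],
    risk_controls=["maximum order size", "kill switch"],
)
```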

Further provisions regulating the non-discriminatory access to trading venues, central counterparties (CCPs) and benchmarks are designed to increase competition.

European Standardisation Organisations (CEN, CENELEC, ETSI)

The three European Standardisation Organisations, CEN [EU 67], CENELEC [EU 68] and ETSI [EU 69], are officially recognised as competent in the area of voluntary technical standardisation. CEN, CENELEC and ETSI enable businesses to comply with relevant directives and regulations through the development of Harmonised Standards (HS), which are published in the Official Journal of the European Union (OJEU).

As contributors to the development of the EU Digital Single Market, the European Standardisation Organisations are tackling fields of work such as additive manufacturing, blockchain, artificial intelligence and cybersecurity.

Connected and Automated Mobility – new rules on autonomous driving

Following the previous ‘Europe on the Move’ initiative of May 2017, the European Commission in 2018 announced a strategy for automated and connected transportation in Europe, the so-called 3rd Mobility Package. [EU 70]

The EU is planning [EU 71] to adopt a new policy recommendation by the end of 2018 / beginning of 2019, setting out the legal framework for the communication between autonomous vehicles and road infrastructure. This is to be achieved by means of so-called „Cooperative Intelligent Transport Systems” (C-ITS). These C-ITS units would be installed as boxes in vehicles and along roads, and connected to traffic control centres. In particular, the EU turned its attention to Japan, where C-ITS has been operating successfully since 2016.

The regulation is supposed to ensure that drivers of vehicles equipped with this technology will be informed of „13 dangerous situations in stationary and moving traffic”. Vehicle movements are to be coordinated in order to, for example, trigger braking manoeuvres and „drastically reduce” frequently occurring accident sequences. In a second phase, further infrastructure, such as charging stations, parking spaces and park-and-ride areas, is to be integrated into the ‘intelligent’ data exchange.

By mid-2019, a „100% guaranteed reliability of the warning notices” is intended to create more legal certainty for automobile manufacturers, road operators and companies in the telecommunications sector. The standards developed include specifications and safeguards for data protection and cyber security.

After about four weeks of consultation with Member States and manufacturers specialising in autonomous driving [EU 72] [EU 73], the Council and the European Parliament still have to approve the regulation, which is expected to be operational by mid-2019.

ADM in Action

DANTE anti-terrorism project

DANTE („Detecting and analysing terrorist-related online contents and financing activities”) is an experimental project, funded by the European Commission within the Horizon 2020 programme, and aimed at using automated decision-making against terrorism. [EU 74] Eighteen EU countries are involved. DANTE is described as a „framework” that supplies „innovative knowledge mining, information fusion, and automated reasoning techniques and services” for the discovery and analysis of terrorist networks by law enforcement and intelligence agencies. It includes „automated functionalities” as wide-ranging as: „detection and monitoring of sources of relevant terrorist-related data in surface/deep Web, and dark nets; accurate and fast detection, analysis, categorisation of suspect terrorist related multi-language contents; large-scale temporal analysis of terrorism trends; real-time summarisation of multilingual and multimedia terrorist-related content; detection of disinformation in online content; detection and monitoring of relevant individuals and linking pseudonyms with the original authors; accurate and fast identification of terrorist online communities and groups; capturing, storing and preserving relevant data for further forensic analysis”. Results are yet to be published.

In the meantime, it was announced that DANTE is collaborating with another Horizon 2020 project, TENSOR. [EU 75] Together, they aim to develop a platform offering Law Enforcement Agencies planning and prevention applications for early detection of terrorist activities, radicalisation and recruitment by the means outlined above. TENSOR is said to have „a work stream dedicated to the ethical, legal and societal impact”.

EU Border Regime & interoperability of EU information systems

In recent years, the European Union has been proposing and adopting mechanisms and initiatives to establish an „integrated smart border management” system. [EU 76] At the same time, it has launched a process towards the interoperability of (existing and future) large-scale EU information systems. [EU 77] This is aimed at integrating instruments for data processing and decision-making systems in the fields of asylum and immigration, border management, and law enforcement cooperation. These developments represent a gradual move away from a „country-centric” approach towards a „person-centric” approach. [EU 78] Though strongly criticised by civil society and data protection bodies (see below in this paragraph), and accompanied by requests for technological reviews [EU 79], the implementation of an overarching interoperable smart border management system is on its way.

eu-LISA, the „European Agency for the Operational Management of large-scale IT Systems in the Area of Freedom, Security and Justice”, is now managing the „strengthened” databases and applications VIS, SIS II and EURODAC together. [EU 80] This is leading to the creation of a „biometric core data system” [EU 81], with three more systems under construction or currently being discussed:

eu-LISA is the agency mandated to implement „technical upgrades” of these IT systems. It runs pilot projects and training, e.g. on the use of identification and verification technologies for border control.

Reflecting on the interoperability of EU information systems in the area of freedom, security and justice, the European Data Protection Supervisor stresses „that interoperability is not primarily a technical choice, it is first and foremost a political choice to be made, with significant legal and societal implications in the years to come”. It sees a „clear trend to mix distinct EU law and policy objectives”. [EU 84] This echoes the criticism of MEP Marie-Christine Vergiat, who claimed that a „presumption of irregularity” underlying this system replaces the presumption of innocence. [EU 85] Michael O’Flaherty, the director of the European Union Agency for Fundamental Rights, addressed the High-Level Expert Group on Information Systems and Interoperability and scrutinised the effect of „flagged” hits—that is, knowledge only that an entry for a specific person exists in a specific database, for example in the European Criminal Records Information System (ECRIS-TCN)—and whether a „flagged” hit may further influence decisions taken about an individual. O’Flaherty also underlines that „there is the risk of discriminatory profiling. The data contained in IT systems can be used for risk assessment or profiling. Risk assessment or profiling is not a violation of fundamental rights, but discriminatory profiling is. The chance of this taking place increases if IT systems are interoperable, as several data categories revealing sensitive data such as race, ethnicity, health, sexual orientation, and religious beliefs can then be accessed simultaneously for profiling purposes.” [EU 86]

With this in mind, see the paragraph on iBorderCtrl, a project on an „Intelligent Portable Border Control System” below.

iBorderCtrl – „Intelligent Portable Border Control System”

iBorderCtrl [EU 87] is a system designed to screen non-EU nationals at EU borders. It is supposed to determine whether a person is telling the truth in an automated interview with a virtual border guard, conducted before the person arrives at the border, i.e. from home. If travellers pass this screening process, they can cross the border without further investigation. If there is suspicion that a person is lying, biometric information, such as fingerprints and palm vein images, is taken at the border control point, and the person is passed on to a human agent who will review the information and make an assessment. iBorderCtrl is currently being tested in Hungary, Greece and Latvia, at those countries’ borders with non-EU countries. [EU 88]

In the first phase—pre-screening with automated checks—third-country nationals register online, upload pictures of their passport, visa, and proof of funds, then they use a webcam to answer questions from a computer-animated border guard avatar. This is described as „a short, automated, non-invasive interview, subject to lie detection” and aimed to „improve performance in comparison to human agents, as it correctly adapts to traveller’s profiles”—meaning it is personalised to the traveller’s gender, ethnicity and language. In a „unique approach to ‘deception detection’” the system „analyses the micro-gestures of travellers to figure out if the interviewee is lying.” [EU 89]

Besides the fundamental question of whether lie detectors are scientifically sound at all, the group of 32 people who tested the avatar was not representative and carried a risk of racial bias: the majority (22) were men, and most were white Europeans; only 10 had an Asian or Arab background. Across the 13 questions asked, the system’s average accuracy per single question was 75%, i.e. a potential error rate of 25% per question. [EU 90]
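
How quickly a 25% per-question error rate compounds over a whole interview can be shown with a back-of-the-envelope calculation. The sketch below assumes, simplistically, that all 13 answers are classified independently at 75% accuracy; the system’s real error structure is not public.

```python
# Back-of-the-envelope estimate under an independence assumption: if each
# of the 13 questions is classified correctly with probability 0.75, the
# chance that an entire interview is processed without any error is tiny.

per_question_accuracy = 0.75
questions = 13

p_all_correct = per_question_accuracy ** questions
print(f"P(all {questions} answers classified correctly): {p_all_correct:.1%}")  # ~2.4%
print(f"P(at least one misclassification): {1 - p_all_correct:.1%}")            # ~97.6%
```

Under this admittedly simplified assumption, nearly every traveller would have at least one answer misclassified somewhere in the interview.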

Travellers at this stage are further informed „of their rights and travel procedures“ and provided with „advice and alerts to discourage illegal activity“.

At stage two—at the border itself—border officials „automatically cross-check information, comparing the facial images captured during the pre-screening stage to passports and photos taken on previous border crossings.” Afterwards, the risk score of the traveller is supposed to be recalculated. „Only then does a border guard take over from the automated system.” [EU 91]

One of the stated aims of the system is „to reduce the subjective control and workload of human agents and to increase the objective control with automated means that are non-invasive and do not add to the time the traveller has to spend at the border”. The application further foresees the creation of an additional layer for „bona fide travellers, especially regular travellers into a Schengen-wide frequent traveller programme including a reward system based on number of successful crossings and trouble-free stay”. [EU 87]

The stated technological framework of the iBorderCtrl system [EU 92] is to „empower—through technology—the border guard”, and it combines several components, among them the Automatic Deception Detection System (ADDS) referred to below.

iBorderCtrl states that „regarding the expected accuracy it would be wrong to expect 100% accuracy from any AI-based deception detection technology, no matter how mature”. The system therefore relies „on many components that address various aspects of the border control procedures, and each provides its own risk estimation for the traveller”, and then „synthesises a single risk score from a weighted combination of components”. Emphasising the „human-in-the-loop principle”, the makers conclude that „it is highly unlikely that an overall system of which ADDS is a part will lead to ‘an implementation of a pseudoscientific border control.’” [EU 92]
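
What „a single risk score from a weighted combination of components” could look like in the simplest case is sketched below. Component names, scores and weights are invented for illustration; the project does not disclose its actual components or their weighting.

```python
# A minimal sketch of a weighted combination of per-component risk
# estimates into one overall score. All numbers are hypothetical.

component_scores = {            # each component's own risk estimate in [0, 1]
    "deception_detection": 0.6, # e.g. the ADDS micro-gesture analysis
    "document_check":      0.1,
    "face_match":          0.2,
    "database_hits":       0.0,
}

weights = {                     # invented weights summing to 1
    "deception_detection": 0.4,
    "document_check":      0.2,
    "face_match":          0.2,
    "database_hits":       0.2,
}

overall_risk = sum(weights[c] * component_scores[c] for c in component_scores)
print(f"overall risk score: {overall_risk:.2f}")  # 0.30 with these numbers
```

Note that in such a scheme the weighting itself encodes policy: how much a „deception” signal counts relative to a document check is a design decision, not a technical necessity.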

According to the scientists involved, EU funding ensures that economic interests do not play a role. Nevertheless, the Commission’s so-called „success story” about the project ends with:

„[...] ‘in light of the alarming terror threats and increasing terror attacks taking place on European Union soil, and the migration crisis’ […], the partner organisations of IBORDERCTRL are likely to benefit from this growing European security market—a sector predicted to be worth USD 146 billion (EUR 128 bn) in Europe by 2020.“ [EU 91]

1 The project’s first version of its so-called “state-of-the-art report on algorithmic decision-making“ was published too late for consideration in this publication—it is available at https://www.algoaware.eu/wp-content/uploads/2018/08/AlgoAware-State-of-the-Art-Report.pdf
2 The project algo:aware [EU 3] aims to engage with a range of stakeholders and seeks to map the areas of interest where algorithmic operations bear significant policy implications. So far, they have produced a series of reports, blog posts, case studies, infographics and policy developments. They aim to provide a platform for information, and a forum for informed debate on the opportunities and challenges that algorithmic decision-making can provide in commercial, cultural, and civic society settings.
3 The EGE is an independent advisory body of the President of the European Commission.
4 The Committee was set up by the 1957 Rome Treaties in order to involve economic and social interest groups in the establishment of the common market and to provide institutional machinery for briefing the Commission and the Council of Ministers on European issues.
5 The project website was updated just after the editorial deadline. The launch date is planned for 1 January 2019. [EU 21]
6 The group’s first draft of the guidelines was published too late for consideration in this publication; it is available at https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_draft_ethics_guidelines_18_december.pdf
7 EDRi add: “Through profiling, highly sensitive details can be inferred or predicted from seemingly uninteresting data, leading to detailed and comprehensive profiles that may or may not be accurate or fair. Increasingly, profiles are being used to make or inform consequential decisions, from credit scoring, to hiring, policing and national security.”
8 Again, the implicit “right to explanation of the system functionality”, or “right to be informed” is restricted by the interests of data controllers and the interpretation and definition of ADM in Art. 22.
9 The Article 29 Data Protection Working Party (WP29) was set up in 1996 by the Directive 95/46/EC as an independent European advisory body on data protection and privacy. With the GDPR in place, the European Data Protection Board (EDPB) replaced the WP29, endorsing its guidance provided on the GDPR. The EDPB has a role to play in providing opinions on the draft decisions of the supervisory authorities. For this purpose, it can examine—on its own initiative or on the request of one of its members or the European Commission—any question covering the application of the GDPR.
10 The directive repeals the Council Framework Decision 2008/977/JHA.
11 So far, it is planned to establish a „European Cybersecurity Certification Group“ consisting of representatives of the national authorities responsible for cyber security certification.
12 “High frequency algorithmic trading (HFAT) is a subset of algorithmic trading. Algorithmic trading uses computer algorithms to automatically determine parameters of orders such as whether to initiate the order, the timing, price or how to manage the order after submission, with limited or no human intervention. The concept does not include any system used only for order routing to trading venues, processing orders where no determination of any trading parameters is involved, confirming orders or post-trade processing of transactions.“ [EU 64]
13 Regulation (EU) No 1025/2012, which establishes the legal framework for standardisation, was adopted by the European Parliament and the Council of the EU and entered into force on 1 January 2013. [EU 66]
14 When there is a hit in one of the EU security databases or a question is answered positively, the data will be manually checked and the risks individually assessed within four weeks. Persons posing a risk will be refused entry, but have the right to appeal.
15 The project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 700626. "The piloting deployment will end in August 2019 after approximately nine months duration. As the hardware and software systems under development in the iBorderCtrl project are not yet authorised law enforcement systems, national legislative authorisation of data processes are not applicable. Informed consent is required to involve test participants and to process their data.“ [EU 87]
16 On the use of ADDS, apart from the research project, the website adds that “in a real border check [it] can not be based on informed consent. A legal basis would be needed, which at present does not exist in the applicable European legal framework.”

Kristina Penner

Kristina Penner is the Executive Advisor at AlgorithmWatch. Her research interests include ADM in social welfare systems, social scoring and the societal impacts of ADM, as well as the development of participatory and empowering concepts. Her analysis of the EU border management system builds on her previous experience in research and counselling on the implementation of European and German asylum law. Further experience includes projects on the use of media in civil society and conflict-sensitive journalism, as well as stakeholder involvement in peace processes in the Philippines. She holds a master’s degree in International Studies / Peace and Conflict Research from Goethe University in Frankfurt.