Civil society reacts to EP AI Act draft Report

Together with civil society partners, we analyse in our new joint statement the draft Report on the AI Act from the European Parliament's two lead committees. In light of our core demands, we identify the important steps it takes – and the gaps it still needs to fill so that it protects people and our fundamental rights.

In November 2021, a coalition of civil society organisations released the statement 'An EU Artificial Intelligence Act for Fundamental Rights'. Signed by 123 civil society organisations, the statement outlined nine recommendations for how the EU AI Act could foreground people and their fundamental rights.

This joint statement evaluates how far the IMCO-LIBE draft Report on the EU's Artificial Intelligence (AI) Act, released on 20 April 2022, addresses those recommendations. We call on Members of the European Parliament to support those amendments which centre people affected by AI systems, prevent harm in the use of AI systems, and offer comprehensive protection for fundamental rights in the AI Act.

1. A cohesive, flexible and future-proof approach to the 'risk' of AI systems

The AI Act’s risk-based approach needs serious modification if it is to keep pace with the speed of AI development and innovation. The draft Report begins to address the rigidity of the AI Act’s approach by expanding the scope of delegated acts under Articles 7 and 84 (Amendments 78-80). The original draft of the AI Act only allows the Commission to adopt delegated acts to update the list of high-risk AI systems in Annex III provided they fall within the scope of the existing eight ‘area headings’. Thankfully, the draft Report extends the scope of delegated acts to allow the area headings to be modified and new headings to be added. These changes are important, as they allow the AI Act to adapt to unforeseen uses of AI that pose a high risk.

Additionally, important amendments have been made to Article 84 to increase the involvement of stakeholders, including civil society, in the consultation process surrounding such updates (Amendments 87-89 and 286). The draft Report maintains the obligation for the Commission to assess the need for updating Annex III on a yearly basis, and adds an amendment to allow for more frequent assessment (Amendments 284-285).

A further positive change in the draft Report is the expansion of the list of high-risk systems in Annex III. Modifications have been made to expand the scope of some existing high-risk use cases (Amendments 287, 288 and 292), alongside new additions covering systems that interact with children (Amendment 289), systems that make decisions related to health or life insurance (Amendment 291), systems related to voting and election campaigning (Amendments 295 and 296), and some uses of generative AI systems (Amendment 297).

Despite these positive changes, much work remains to be done to achieve a well-functioning, future-proof and democratic risk-based approach. The scope of delegated acts must be expanded to allow the list of prohibited practices in Article 5 and the list of AI systems in Article 52 to be updated. Without the possibility of updating these risk categories, the AI Act will remain rigid, with two closed lists of systems that could easily become outdated in the fast-moving world of AI development.

Furthermore, the draft Report has neglected to add any new biometric systems to Annex III, despite the fact that systems that utilise our biometric data inherently pose a high risk to fundamental rights. There have also been no amendments to Annex III to expand the number of AI systems that pose a high risk in migration management, such as predictive analytics, systems used for monitoring and surveillance in border control, and biometric identification systems.

2. Prohibitions on AI systems posing an unacceptable risk to fundamental rights

It is vital that the list of ‘prohibited AI practices’ is expanded to cover all systems that are proven to pose an unacceptable risk of violating fundamental rights, and that Article 5 can be updated to account for unacceptable uses in the future (see section 1). The draft Report does not go far enough towards this aim.

One such unacceptable use of AI is ‘predictive policing’, i.e. the use of AI systems by law enforcement and criminal justice authorities to make predictions, profiles or risk assessments for the purpose of predicting crimes. The prohibition on predictive systems introduced by the draft Report (Amendment 76) is a positive step, as it would prevent the use of systems that target individuals, undermining the presumption of innocence and reinforcing racial profiling.

However, the proposed prohibition would arguably not cover ‘place-based’ predictive policing systems, leaving unclear how EU negotiators will treat AI systems used to predict whether crimes are likely to be committed in certain neighbourhoods. Yet place-based predictive policing systems are equally harmful: extensive research has shown how they reinforce discriminatory policing practices based on historical policing patterns, intensify the over-surveillance and criminalisation of racialised and working-class communities, and equally challenge the presumption of innocence, on a collective basis.

Improvements have been made to the problematic definitions of biometric categorisation and emotion recognition, largely thanks to the inclusion of a new definition of ‘biometrics-based data’ (Amendment 64) and the modification of the definitions of emotion recognition and biometric categorisation (Amendments 67 and 68). However, the draft Report has not made any further amendments to address the wide range of harms caused by the use of other biometric systems. As recommended in our Civil Society Statement, the AI Act must introduce further bans: on emotion recognition; on biometric categorisation systems used to track, categorise and/or judge people in publicly accessible spaces; and on systems which amount to AI physiognomy by using data about our bodies to make problematic inferences about personality, character, or political and religious beliefs.

Disappointingly, no amendments have been made to improve the existing prohibitions in Article 5 in line with civil society recommendations, including on remote biometric identification (RBI). The disproportionate violation of fundamental rights posed by the use of RBI in public means that nothing short of a full ban on all uses (real-time and post) by all actors (public and private) in publicly accessible spaces is needed – without exceptions.

Moreover, the draft Report fails to address the alarming lack of prohibitions for the use of AI systems in the migration context. In order to ensure the protection of migrants and people on the move, specific prohibitions must be added to Article 5, such as on the use of AI-based individual risk assessment and profiling systems (as they undermine the right to non-discrimination and prejudice the fairness of migration procedures), on AI polygraphs, and on predictive analytic systems used to interdict, curtail and prevent migration.

3. Obligations on users (deployers) of high-risk AI systems to facilitate accountability to those impacted by AI systems

The draft Report takes positive steps towards balancing the responsibilities of providers and users in the AI Act. For example, the co-rapporteurs have introduced data quality (Amendment 99) and human oversight (Amendments 136-138) requirements on users deploying high-risk systems, and require that public authority users of high-risk systems register in the Article 60 public database (see section 4). Further, the draft Report introduces an obligation on all users of high-risk systems to inform persons affected by them if those systems make or assist in making decisions related to them (see section 5). These adjustments provide an important baseline accountability framework for users of high-risk AI.

However, the draft Report omits the obligation most crucial to ensuring the accountability of users deploying high-risk AI systems: fundamental rights impact assessments before deployment. Without this, the AI Act will not offer a meaningful framework of accountability for those deploying high-risk AI. Requiring users to conduct and publish an assessment of the likely impact of AI systems on fundamental rights, the environment and the broader public interest is necessary to ensure there is contextual information about how such systems will affect people and society, including full transparency as to how users intend to mitigate those harms. As highlighted by some Member States, mandatory impact assessments are a crucial measure to ensure foresight and accountability for potential AI-related harms.

4. Consistent and meaningful public transparency

We welcome the co-rapporteurs’ efforts to improve public access to information about AI systems. The public database foreseen in the initial version of the AI Act would only contain information on high-risk systems registered by their providers. The draft Report takes a good step towards meaningful public transparency by extending the database to uses of high-risk systems in the public sector (Amendment 172). Hence, not only providers but also users of high-risk systems – if they are public authorities – would be under an obligation to register their use of high-risk AI systems. This will help the public find out where, by whom and for what purpose high-risk AI systems are actually used, not only stimulating public debate but also allowing for public scrutiny over the use of AI. This is key because the risks that come with a system depend heavily on the context in which it is used.

However, the draft Report does not go far enough in terms of public transparency over all high-risk use cases in the EU database. It lacks an obligation for private entities to register their use of high-risk AI systems. This leaves a significant gap in public transparency as to how the private sector uses such systems, particularly as they can have significant impacts on people’s lives, rights and well-being, for example in the labour market. Lastly, while it is a great step forward that public authorities would need to register uses of high-risk AI, their unique role and responsibilities mean this obligation should extend beyond the high-risk category: information on all uses of AI systems by public authorities, regardless of the systems’ risk level, should be made public in the EU database. Transparency is a necessary – though not yet sufficient – condition for holding not only those who provide but also those who deploy AI systems accountable.

5. Meaningful rights and redress for people impacted by AI systems

We welcome the co-rapporteurs’ proposal to address one of the major gaps of the AI Act – the lack of tools for people affected by AI systems to challenge harmful or discriminatory uses. Amendments 269 and 270 would guarantee that individuals can complain to national authorities and seek a remedy if their health, safety or fundamental rights have been breached. This will help make the requirements and safeguards envisioned by the Act a reality.

We also welcome the introduction of an obligation on users of high-risk AI systems to inform affected people that they are subject to an AI system (Amendment 145). However, this provision should require more than a mere notification that an AI system is in use: it should also detail additional information that is key for meaningful transparency. At a minimum, affected people should be able to find out the purpose of the system, what rights they have (e.g. the right to complain mentioned above) and where they can find more information about the functioning and logic of the system. The Parliament should also consider extending these basic transparency rights to other systems which affect people but have not necessarily been classified as high-risk (e.g. systems which personalise prices or services).

Two key mechanisms for holding AI systems to account are also missing from the draft Report. First, the co-rapporteurs must include the right to an explanation of individual decision-making. Understanding why a system produced a certain prediction or assessment in an individual case is crucial for challenging discriminatory or otherwise harmful outcomes. At the same time, existing legislation, in particular data protection law, does not directly provide for a right to explanation understood as information on how the general logic of the system has been applied in a specific case. Even if such a right can be read into Article 22 of the General Data Protection Regulation (GDPR), it might only apply to decisions made by AI systems without any human involvement, which will most likely not cover high-risk AI systems.

Second, the draft Report lacks a mechanism for public interest organisations to participate in the investigation and enforcement process, for example by enabling them to trigger an investigation into a system where violations of fundamental rights have been identified. It is vital that the AI Act ensures a full range of rights and redress mechanisms for people affected by AI systems.

6. Accessibility throughout the AI life-cycle

Unfortunately, no improvements were made in the draft Report to ensure accessibility requirements for AI systems. To truly ensure that AI systems are ‘trustworthy’ and work for all people, the AI Act must include accessibility requirements in compliance with the European Accessibility Act (Directive 2019/882), to ensure that accessibility is guaranteed for all AI systems and throughout their deployment. It is not sufficient that accessibility be included only in Codes of Conduct as voluntary measures; accessibility must be mandated for the development and deployment of all AI systems.

Furthermore, accessibility should be mainstreamed throughout the AI Act, including with reference to the information and transparency clauses in the Regulation, the EU database, the rights of natural persons to be notified and seek an explanation, and within any future obligation on fundamental rights impact assessments. Finally, accessibility should be ensured in all consultation and involvement of rights-holders and civil society organisations when implementing this Regulation.

7. Sustainability and environmental protections when developing and using AI systems

Regrettably, the draft Report does not include an obligation for providers and/or users to include information regarding the environmental impact of developing or deploying AI systems. Moving forward, legislators must introduce public-facing transparency requirements on the resource consumption and greenhouse gas emission impacts of AI systems – a first step towards ensuring that AI systems are developed and used in a resource-friendly way that respects our planetary boundaries.

8. Improved and future-proof standards for AI systems

The draft Report takes some positive, but far from sufficient, steps to improve the standardisation process. Amendments 160 and 161 require that the standardisation and specification processes ensure a balanced representation of interests and the participation of relevant stakeholders, a key demand of civil society. Pointing to the major concern that standardisation organisations tend to be opaque private bodies largely dominated by industry actors, civil society demanded that data protection authorities, civil society organisations, SMEs, and environmental, consumer and social stakeholders be represented and enabled to participate effectively in AI standardisation and specification-setting processes. It remains to be seen how effective this guarantee of meaningful representation in the standardisation process will be. It could be improved by clearer language in Articles 40 and 41 as to the nature of this obligation, including specific reference to the aforementioned civil society stakeholders.

Most importantly, however, the draft Report does not address a core flaw in the use of the standardisation process within the AI Act: the risk that some fundamental rights and legal issues may be determined by the standardisation process, meaning substantive decisions could be made in an undemocratic way. The Act needs to set explicit, clear and comprehensive political rules on which aspects of the Act will be subject to standardisation, in order to limit the scope of the harmonised standards established in Article 40 to what is actually within the competence of standardisation organisations (which is to translate the political decisions of the co-legislators into technical standards). Fundamental rights must not be subject to standardisation. This will ensure that the overall authority to ensure that AI systems comply with fundamental rights principles, and to perform oversight of non-technical issues, remains within the remit of the legislative process and the relevant authorities.

9. A truly comprehensive AIA that works for everyone

The co-rapporteurs make some notable attempts to improve the enforcement of the AI Act, including an enhanced role and greater independence for the EU Artificial Intelligence Board, and an improved reference to the participation and consultation of stakeholders. It is also positive to see the specific inclusion of reporting obligations for national supervisory authorities regarding uses of prohibited AI systems, as well as misuses (Amendment 247).

However, the draft Report leaves a number of flaws with respect to the scope of the AI Act unaddressed, limiting its potential as legislation to protect the rights of all. Specifically, the co-rapporteurs did not amend Article 83 to remove the exemption from the scope of the AI Act for AI systems that form part of large-scale EU IT databases. As outlined by civil society, this exclusion allows such systems to avoid necessary oversight. This is damaging for the fundamental rights of people on the move, as the large-scale IT systems at stake affect almost all third-country nationals, and all of these systems involve or intend to involve AI-based systems that would otherwise be classified as ‘high-risk’ under the Regulation. Without any reasonable justification, this exemption creates severe fundamental rights harms and presupposes a differential approach to fundamental rights when migration is the subject matter and people on the move are the rights-holders. The exemption in Article 83 must be removed by legislators.

Outstanding issues for the AI Act

Lastly, the draft Report proposes a series of other amendments with potential implications for fundamental rights and the overall working of the AI Act. Firstly, the co-rapporteurs propose the removal of the Article 47 derogation from the conformity assessment procedure. This is a welcome improvement: where AI is used for reasons of public security, it is more important than ever that the system meets the requirements laid out in the Regulation. Moving forward, it is vital that legislators resist attempts to override technical and fundamental rights safeguards in the name of ill-defined and broad conceptions of ‘security’. The scope of the AI Act must not unduly exclude AI systems used for national security.

Further, the draft Report fortunately does not introduce changes to the definition of AI. While members of other committees (JURI and ITRE), as well as the Slovenian presidency’s compromise text, have attempted to narrow the definition of AI, despite recommendations to the contrary from civil society and academics, the draft Report is to be commended for keeping the definition broad. This broad scope is essential to safeguard both fundamental rights and innovation, as a narrow definition focused only on machine learning would arbitrarily exclude systems that impact people’s rights and disincentivise the use of cutting-edge techniques.

In addition, the draft Report has not added problematic loopholes for so-called general purpose AI systems. While particular treatment for such systems may indeed be advisable, what we have seen to date are problematic loopholes that would essentially let some of the largest AI providers escape regulatory scrutiny. Any amendments adding specific provisions for general purpose AI systems should introduce tailored obligations that capture the particular challenges such systems pose, to ensure a balanced and effective regulatory approach.

It is vital that the legislators foreground the concerns of people and issues of fundamental rights in onward negotiations on the AI Act. Moving forward, we urge MEPs to be bold in amending the AI Act to safeguard the rights of people and ensure that AI development and deployment fully respects fundamental rights and democracy.