Civil society open letter demands fundamental rights protections in the Council position on the AI Act
Negotiations in the Council of the EU on the AI Act are proceeding at full speed. Civil society calls on the Council to correct major shortcomings in its version of the Act, which fails to ensure the necessary safeguards against harmful applications of AI systems.
The Council of the EU is working at full speed to reach an agreement on the AI Act. However, the text in its current form contains major shortcomings which, unless they are corrected, would pose a growing threat to our fundamental rights and democracy. Together with civil society, we are addressing an open letter to Ivan Bartoš, Czech Deputy Prime Minister for Digitisation, and to the ministers of the member states, urging them to take steps to ensure that the Council’s version of the AI Act protects people and our societies from the risks stemming from AI systems. Czechia currently holds the presidency of the Council, and under its presidency the Council might reach an agreement on the AI Act by the end of this year. Once both the Council and the European Parliament have reached agreement within their institutions, trilogue negotiations with the European Commission will begin in order to reach a provisional agreement on the Act.
Open letter
Dear Ivan Bartoš, Deputy Prime Minister for Digitisation,
EU Ministers of Telecoms and Digitalisation,
We, the undersigned organisations, are writing to bring to your attention a number of serious shortcomings in the Council position on the Artificial Intelligence Act (AI Act) (COM/2021/206) currently under negotiation by the Czech Presidency of the Council of the European Union. This communication builds on the 2021 position of 123 civil society organisations calling for the European Union to foreground a fundamental rights-based approach to the AI Act.
In particular, we draw your attention to (a) Essential fundamental rights protections currently missing from the Council’s position, and (b) Key shortcomings of the Council position from a fundamental rights perspective.
(A) Essential fundamental rights protections currently missing from the Council’s approach to the AI Act
The Council has yet to incorporate several necessary features of a fundamental rights-based approach into the AI Act. We call on the Czech Presidency and Member States to ensure that the following features are reflected in the final Council position:
- Rights and redress mechanisms to empower people affected by AI systems. Whilst the Council takes a positive step to enable individuals to submit complaints in the event of non-compliance, further steps are necessary to empower people affected by AI systems to understand, challenge and seek redress. This includes:
- The right to be provided on request with clear and intelligible information, including for children and non-native speakers, in a manner that is accessible for persons with disabilities, about decisions taken with the assistance of systems within the scope of the AI Act;
- A right not to be subject to non-compliant AI systems;
- The right for public interest organisations to lodge a complaint with a supervisory authority for a breach of the Regulation or for AI systems which undermine fundamental rights or the public interest;
- The right to an effective remedy against the national supervisory authority or user for those whose rights under the Regulation have been infringed.
- Meaningful accountability and public transparency obligations on public uses of AI systems and on all ‘users’ of high-risk AI. To ensure the highest level of fundamental rights protection, those deploying high-risk AI systems (i.e. ‘users’) must provide public information about the use of such systems. Such information is crucial for public accountability, allowing public interest organisations, researchers and affected persons to understand the context in which high-risk systems are deployed. The following obligations on users must be included in the AI Act:
- An obligation on users to register all high-risk deployments of AI systems in the Article 60 database;
- An obligation on public authorities, or those acting on their behalf, to register all uses of AI systems in the Article 60 database, regardless of risk level;
- An obligation on users of high-risk AI systems to conduct and publish in the Article 60 database a fundamental rights impact assessment (FRIA) before deploying any high-risk AI system.
- Ensuring meaningful and balanced civil society participation in all aspects of the AI Act. This includes explicitly ensuring meaningful participation of civil society organisations within the standardisation process, governance and enforcement, and the processes for updating the lists of systems in Annex III and other risk categories. Fundamental rights and accessibility experts, experts with lived experience, as well as domain experts in the areas covered in Annex III, such as workers’ rights, education and migration, must be assured equal participation and weight in the consultation and decision-making processes for the AI Act.
- Accessibility requirements for providers and users of all AI systems throughout the life cycle. To prevent inaccessible AI systems from locking persons with disabilities out of social participation, widening inequality gaps and infringing their human rights, it is necessary to go beyond voluntary measures such as codes of conduct and establish mandatory requirements consistent with the European Accessibility Act (EAA). These requirements must also cover all information about AI systems and the related manuals.
- Comprehensive prohibitions on all AI systems posing an ‘unacceptable risk’ to fundamental rights. It is vital that the Article 5 list of ‘prohibited AI practices’ is extended to cover all systems that are proven to pose an unacceptable risk of violating fundamental rights, such as:
- A full prohibition (without exceptions) on remote biometric identification (RBI) to apply to all actors, not just law enforcement, as well as to both ‘real-time’ and ‘post’ uses;
- The use of AI systems by law enforcement and criminal justice authorities to make predictions, profiles or risk assessments for the purpose of predicting crimes;
- The use of AI-based individual risk-assessment and profiling systems in the migration context; predictive analytics for the purpose of interdicting, curtailing and preventing migration; and AI polygraphs for migration management purposes;
- The use of emotion recognition systems that claim to infer people’s emotions and mental states;
- The use of biometric categorisation systems to track, categorise and judge people in publicly accessible spaces, or to categorise people on the basis of protected characteristics.
(B) Key shortcomings of the Council position from a fundamental rights perspective
The following features of the Council’s position risk undermining the effective protection of people and fundamental rights from high-risk AI systems:
- Narrowing of the definition of AI systems: The Council has proposed to limit the definition of AI to systems that operate with ‘a certain level of autonomy’ and that use machine learning and/or logic- and knowledge-based approaches. The concern is that the notion of autonomy is highly vague, and that simpler, rule-based systems, such as the Dutch SyRI system (composed of a spreadsheet and a programming script to create risk profiles), and other systems that are technologically limited and yet harmful to fundamental rights, would be excluded from the AI Act. Further, such a proposal would also stifle innovation by applying obligations only to systems using advanced techniques, thereby disincentivising their use compared to traditional software approaches. We therefore propose that the original broad scope of the Commission’s definition be preserved, and that the use of vague terms such as ‘autonomy’ be avoided.
- Changes to the classification of ‘high-risk’ AI systems. The compromise text limits the classification of high-risk systems to cases where the output is ‘not purely accessory’ in respect of the relevant action or decision to be taken. This would severely complicate the process of risk classification under the AI Act, leaving providers the discretion to decide in the abstract whether their systems will be used in an ‘accessory’ manner or not, severely compromising the legal certainty necessary for the successful operation of the AI Act. There is a real danger that modifications to Article 6, such as those in the current Council compromise text, will create loopholes for companies able to argue that their systems do not fit the criteria. Instead, the Commission’s original form of Article 6 should be maintained. We propose removing the new Article 6(3) from the compromise text.
- Loopholes to public transparency for high-risk AI systems in law enforcement and migration: Proposals to exempt law enforcement and migration authorities from the registration obligations in Article 29(5), or from providing information about the ‘intended purpose of the AI system’ under Part II(4) of Annex VIII, prevent effective public transparency about the use of high-risk systems in the areas with perhaps the most severe impact on fundamental rights. This is concerning because public transparency is necessary for effective oversight, particularly in the areas of law enforcement and migration where numerous fundamental rights are at stake. The second Czech compromise text also proposes specific carve-outs for the use of ‘sensitive operational data’ by law enforcement authorities, without defining the scope of this data. The vagueness of these carve-outs would lead to blanket exemptions from transparency obligations for AI systems used by law enforcement and migration authorities, further weakening effective public oversight. We propose removing these transparency exemptions from the compromise text.
- Broad proposed exemption from scope for AI used for national security purposes: Article 2(3) of the current proposal excludes AI systems developed or used for national security purposes from the scope of the AI Act. Without proper regulatory safeguards, intrusive AI-based technologies, including those with mass surveillance outcomes, could be used in the public sector with no special limitations or safeguards whenever “national security” grounds are invoked. We propose replacing the blanket exemption for national security purposes with an exemption limited to AI systems falling outside the scope of Union law.
- Lack of assured fundamental rights expertise in the national enforcement authorities: the Council text dilutes the Commission’s initial proposal for Article 59(4), which required a sufficient number of permanent personnel with an in-depth understanding of fundamental rights, as well as of health and safety risks. This change would increase the burden on national bodies and lead to insufficient enforcement of fundamental rights safeguards due to a lack of assured expertise within the authorities. We therefore recommend keeping the original wording of the article and complementing it with a requirement to assess and plan for the financial implications of the AI Act, so as to ensure that enforcement bodies and other relevant bodies have the resources to meaningfully fulfil their tasks under the AI Act.
We sincerely hope that, in your responsible capacities, you take the necessary steps to ensure the concerns outlined in this letter are addressed.
We look forward to exchanging further with you to discuss how the Council can ensure that the Compromise on the AI Act maintains the highest level of fundamental rights protection.
Sincerely,
Access Now
AlgorithmWatch
Amnesty Tech
European Digital Rights (EDRi)
European Disability Forum
European Center for Not-for-Profit Law (ECNL)
Fair Trials
Homo Digitalis
Irish Council for Civil Liberties (ICCL)
Panoptykon Foundation
PICUM - Platform for International Cooperation on Undocumented Migrants
Read more on our policy & advocacy work on the Artificial Intelligence Act