Final EU negotiations: we need an AI Act that puts people first
As negotiations on the Artificial Intelligence Act (AI Act) enter their final stage, AlgorithmWatch, together with 149 civil society organisations, calls on EU institutions to improve the regulation. In our civil society statement, we spell out clear suggestions for how the regulation should be changed in order to protect people and our human rights effectively.
In Europe and around the world, AI systems are used to monitor and control us in public spaces, predict our likelihood of future criminality, facilitate violations of the right to claim asylum, predict our emotions and categorise us, and to make crucial decisions that determine our access to public services, welfare, education and employment.
Without strong regulation, companies and governments will continue to use AI systems that exacerbate mass surveillance, structural discrimination, the centralised power of large technology companies, unaccountable public decision-making and environmental damage.
We call on EU institutions to ensure that AI development and use are accountable and publicly transparent, and that people are empowered to challenge harms:
1. Empower affected people with a framework of accountability, transparency, accessibility and redress
- An obligation on all public and private ‘users’ (deployers) to conduct and publish a fundamental rights impact assessment before each deployment of a high-risk AI system and meaningfully engage civil society and affected people in this process;
- Require all users of high-risk AI systems, and users of all systems in the public sphere, to register their use in the publicly viewable EU database before deployment;
- Ensure that EU-based AI providers whose systems impact people outside of the EU are subject to the same requirements as those inside the EU;
- Ensure horizontal and mainstreamed accessibility requirements for all AI systems;
- Ensure that people are notified and have the right to seek information when they are affected by AI-assisted decisions and outcomes;
- Include a right for people affected to lodge a complaint with a national authority if their rights have been violated by the use of an AI system;
- Include a right to representation of natural persons and the right for public interest organisations to lodge standalone complaints with a national supervisory authority;
- Include a right to effective remedies for the infringement of rights.
2. Draw limits on harmful and discriminatory surveillance by national security, law enforcement and migration authorities
AI systems are increasingly developed and deployed for harmful and discriminatory forms of state surveillance. Such systems disproportionately target already marginalised communities, undermine legal and procedural rights, and contribute to mass surveillance. When AI systems are deployed in the contexts of law enforcement, security and migration control, there is an even greater risk of harm and of violations of fundamental rights and the rule of law. To maintain public oversight and prevent harm, the EU AI Act must include:
- A full ban on real-time and post remote biometric identification in publicly accessible spaces, by all actors, without exception;
- A prohibition of all forms of predictive and profiling systems in law enforcement and criminal justice (including systems which focus on and target individuals, groups and locations or areas);
- A prohibition on the use of AI in migration contexts to make individual risk assessments and profiles based on personal and sensitive data, and on predictive analytics systems used to interdict, curtail and prevent migration;
- A prohibition on biometric categorisation systems that categorise natural persons according to sensitive or protected attributes as well as the use of any biometric categorisation and automated behavioural detection systems in publicly accessible spaces;
- A ban on the use of emotion recognition systems to infer people’s emotions and mental states;
- A rejection of the Council’s addition of a blanket exemption from the AI Act for AI systems developed or used for national security purposes;
- The removal of exceptions and loopholes for law enforcement and migration control introduced by the Council;
- Public transparency as to what, when and how public actors deploy high-risk AI in the areas of law enforcement and migration control, with no exemption from the obligation to register high-risk uses in the EU AI database.
3. Push back on Big Tech lobbying: remove loopholes that undermine the regulation
The EU AI Act must set clear and legally certain standards of application if the legislation is to be effectively enforced. The legislation must uphold an objective process for determining which systems are high-risk, and remove any ‘additional layer’ added to the high-risk classification process. Such a layer would allow AI developers, without accountability or oversight, to decide whether or not their systems pose a ‘significant’ enough risk to warrant legal scrutiny under the Regulation. A discretionary risk classification process risks undermining the entire AI Act: it would shift the regulation towards self-regulation, pose insurmountable challenges for enforcement and harmonisation, and incentivise larger companies to under-classify their own AI systems.
Negotiators of the AI Act must not give in to the lobbying efforts of large tech companies seeking to circumvent regulation in their financial interest. The EU AI Act must:
- Remove the additional layer added to the risk classification process in Article 6 and restore the clear, objective risk-classification process outlined in the original position of the European Commission;
- Ensure that providers of general purpose AI systems are subject to a clear set of obligations under the AI Act, so that smaller providers and users do not bear the brunt of obligations better suited to the original developers.
Drafted by:
- European Digital Rights (EDRi)
- Access Now
- AlgorithmWatch
- Amnesty International
- Bits of Freedom
- Electronic Frontier Norway (EFN)
- European Center for Not-for-Profit Law (ECNL)
- European Disability Forum (EDF)
- Fair Trials
- Hermes Center
- Irish Council for Civil Liberties (ICCL)
- Panoptykon Foundation
- Platform for International Cooperation on the Rights of Undocumented Migrants (PICUM)
Read more about our policy & advocacy work on the Artificial Intelligence Act