Negotiations about the Artificial Intelligence Act (AI Act) are approaching a final phase in the European Parliament. Once the Parliament has agreed on its position, the trilogue (inter-institutional) negotiations with the EU Council and Commission will begin. In this crucial phase, AlgorithmWatch, together with European Digital Rights (EDRi), Access Now, and many other civil society organizations, calls on Members of the European Parliament to take a position on the AI Act that foregrounds people’s interests and the protection of our fundamental rights.
Civil society’s demands
➢ Empower people affected by AI systems
One of the main objectives of the AI Act is to lay down rules for ‘AI’ that protect people against the risks stemming from it. However, in the original draft of the regulation, people are not equipped with the means necessary to vindicate their rights. It is therefore essential that people affected by AI systems have access to information about those systems and the possibility to challenge them when they believe their rights have been violated.
➢ Ensure accountability and transparency for the use of AI
The draft AI Act takes some good steps towards ensuring transparency and accountability when AI systems are being developed, but not when they are used. We demand that anyone who deploys a high-risk AI system, as well as any public-sector body that deploys an AI system, be obliged to register it in a publicly accessible database. Since the AI Act focuses mainly on tackling the risks stemming from AI at a technical level, we call for an obligation on deployers of high-risk systems to conduct a fundamental rights impact assessment (FRIA) before they deploy the system.
➢ Prohibit AI systems that pose an unacceptable risk to fundamental rights
While the draft AI Act designates certain systems as posing unacceptable risks from a fundamental rights perspective and therefore subject to prohibition, it takes a very weak approach to banning those systems. Any form of biometric identification and categorization that can lead to mass surveillance in our public spaces must be fully banned. Flawed, discriminatory AI systems such as predictive policing tools or emotion recognition must be prohibited, too.
Read the letter with our full demands.
Drafted by: European Digital Rights (EDRi), AlgorithmWatch, Access Now, Amnesty International, Article 19, Bits of Freedom, Electronic Frontier Norway (EFN), European Center for Not-for-Profit Law (ECNL), European Disability Forum, Fair Trials, Homo Digitalis, Irish Council for Civil Liberties (ICCL), Panoptykon Foundation, Platform for International Cooperation on the Rights of Undocumented Migrants (PICUM).