Press release

EU Parliament vote on AI Act: Lawmakers chose to protect people against the harms of AI systems

After long months of intense negotiations, Members of the European Parliament voted on the EU’s Artificial Intelligence Act (AI Act). AlgorithmWatch applauds the Parliament for strengthening the protection of fundamental rights against the negative impacts of AI, such as face recognition in public spaces. Yet the Parliament missed the opportunity to enhance protection for some of the people who need it most.

Nikolett Aszódi
Policy & Advocacy Manager

The landmark vote signals the EU Parliament's position ahead of the trilogue negotiations with the EU Council (which consists of the EU member states) and the Commission.

One of the most controversial and contested issues in the vote was remote biometric identification (RBI), such as face recognition. Going beyond the Commission’s 2021 proposal, which would have left an array of exceptions to the ban on RBI, Members of the Parliament decided to put people first: they propose to comprehensively ban the real-time use of RBI and to allow post-RBI (where the biometric analysis does not take place in real time but afterwards) only under very strong safeguards. The majority thereby rejected a last-minute attempt by right-wing Members of the Parliament to water down the ban. With such a comprehensive ban, our public spaces will be effectively protected against intrusive, permanent forms of mass surveillance. This is complemented by the Parliament’s vote for a ban on discriminatory biometric categorization and on systems used for emotion recognition in law enforcement, education, border control, and the workplace. The Parliament also agreed to ban predictive policing based on the profiling of people.

Enhanced transparency and accountability

According to the Parliament, public authorities that deploy high-risk AI systems should be obliged to register this use in an EU-wide database and thereby provide information about the deployment context. This would enable the public to better understand where these systems are used, by whom, and for what purpose – and might reveal how individuals and society are affected by them. Furthermore, all deployers of high-risk AI systems should be obliged to conduct a fundamental rights impact assessment before deployment – which, in the case of public institutions, will be made public. With these two requirements, the Parliament follows two of AlgorithmWatch’s core demands: “By subjecting those who deploy AI systems to public scrutiny, we could finally shed light on the black box of how algorithms affect people and society. This is a great success, and an important step towards democratic control over the ways in which AI is used in our society,” concludes Angela Müller, Head of AlgorithmWatch’s Policy & Advocacy team.

Equally important is the Parliament’s call to consider questions of sustainability around AI systems. Deployers of high-risk AI will need to assess a system’s environmental impact before using it – and foundation models like those behind ChatGPT will have to comply with European environmental standards.

Missed opportunity to protect the people who need it most

Disappointingly, lawmakers did not vote to extend protection to people on the move: they did not adopt a ban on systems that profile people based on their sensitive characteristics or on predicted data in order to assess whether they pose a threat in the areas of migration, asylum, and border control. Also regrettably, policymakers did not vote to grant people the right to be represented by a public interest organization when lodging a complaint after being negatively affected by an AI system.

These shortcomings must be corrected in the upcoming trilogue negotiations between the EU Parliament, the Member States, and the Commission, which are scheduled to start right after today’s vote. “Overall, Members of the Parliament gave a strong signal that they prioritize the protection of people and their fundamental rights over Big Tech’s interests and state overreach. They went much further than the EU Commission and the Council in laying down safeguards for the development and use of AI,” says Nikolett Aszódi, Policy & Advocacy Manager at AlgorithmWatch.

For more information and interview requests, please contact:

Nikolett Aszódi
aszodi@algorithmwatch.org
+49 (0)30 99 40 49 001

Read more about our policy & advocacy work on the Artificial Intelligence Act