Press release

AI Act about to finally become law?

Berlin/Brussels, 2 February 2024. Today, the EU’s 27 Member States are set to decide whether to adopt the AI Act. If they give their go-ahead, some AI systems will have to undergo fundamental rights impact assessments, and in certain cases, people will be able to lodge complaints when they are affected by AI. At the same time, human rights protection will suffer from various exceptions for high-risk systems used in the contexts of national security, law enforcement, and migration.

The States will vote on the final compromise text of the AI Act in the committee composed of each country's deputy permanent representatives to the EU (called COREPER I). This can be considered a pivotal vote, because if the AI Act is not backed by a majority of Member States today, the entire legislative schedule is likely to unravel before the EU elections, which would result in significant delays.

So far, some but not all Member State governments have announced how they intend to vote in this meeting. The AI Act is expected to be accepted by a majority in the COREPER meeting. If so, the EU Council – which is formed by the competent ministers of all Member State governments and which holds the official decision-making powers – can proceed to vote on the Act. After the Council vote, the European Parliament (EP) will also cast a final vote in the competent committees and in the plenary. Once a majority in both legislative bodies, the Council and the EP, has voted in favor, the AI Act will be published in the EU’s Official Journal and become law.

Achievements and shortcomings

Thanks to the intense advocacy efforts of civil society organizations, the Act now foresees a mandatory fundamental rights impact assessment and public transparency duties for deployments of high-risk AI systems – key demands that AlgorithmWatch has been fighting for over the last three years. Affected people will also have the right to an explanation when a decision based on a high-risk AI system affects their rights, and will be able to lodge complaints about such systems.

These big wins are weakened by major loopholes, such as the fact that AI developers themselves have a say in whether their systems count as high-risk. Also, there are various exceptions for high-risk systems used in the contexts of national security, law enforcement, and migration, where authorities can often avoid the reach of the Act’s core provisions.

Opposing forces still at work

Some Member States, in particular France and Germany, challenged the AI Act at the very last moment. They argued that the AI Act would be too restrictive, specifically in terms of the rules on general-purpose AI systems (GPAI), and both countries raised issues relating to law enforcement agencies. France in particular pushed for very little transparency and oversight of police and migration control agencies’ use of AI systems. It has faced allegations that industry lobbying and conflicts of interest lie behind its strong stance against regulating GPAI. However, both the German and the French governments will most likely withdraw their objections and vote for the adoption of the Act.

Angela Müller, Head of Policy & Advocacy at AlgorithmWatch, comments: “The final AI Act compromise reveals a systemic flaw within EU law-making: a disproportionate influence held by national governments and law enforcement lobbies, outweighing those championing public interest and human rights.”

The AI Act will introduce some important improvements designed to enhance technical standards and increase accountability and transparency for the use of high-risk AI. However, the final AI Act text also gives serious cause for concern. Indeed, it may contribute to expanding and legitimizing the surveillance activities of police and migration control authorities, undermining fundamental rights in the face of AI systems.

Kilian Vieth-Ditlmann, Deputy Head of Policy & Advocacy at AlgorithmWatch, concludes: “The AI Act establishes important basic transparency obligations but fails to provide adequate protections against biometric mass surveillance.”

For interview requests, please contact:
Kilian Vieth-Ditlmann
Deputy Team Lead for Policy & Advocacy at AlgorithmWatch
E-Mail: vieth-ditlmann@algorithmwatch.org
Phone: +49 30 99 40 49 003