A guide to the AI Act, the EU’s new AI rulebook

In 2024, the EU adopted a law to regulate Artificial Intelligence across all sectors. The regulation is a modest step towards protecting people's fundamental rights from the risks stemming from AI. In this guide, we unpack what the new rules entail and what they will and won't change.


26 August 2022 (update: 4 July 2024)




If you want to learn more about our policy & advocacy work on the AI Act, get in touch with

Kilian Vieth-Ditlmann
Deputy Team Lead for Policy & Advocacy
Nikolett Aszódi
Policy & Advocacy Manager

Algorithmic systems are increasingly being used in all areas of life. They can shape our lives to a great extent and have implications for our fundamental rights. If car insurance companies charge people with nearly identical driving profiles different rates based on their ethnicity, they violate the right to non-discrimination. If corporations and state authorities deploy AI systems to monitor public spaces, assess people based on their appearance, and make decisions based on the data they gather, they might violate the right to privacy as well as the rights to freedom of speech and assembly. In both cases, substantial power asymmetries are created.

Essentially, the EU Artificial Intelligence Act (AI Act) falls short of protecting fundamental rights in a meaningful way. The law contains big loopholes. Law enforcement as well as security and migration bodies could exploit them, as necessary safeguards are lacking in these areas. Also, as an outcome of heavy industry lobbying, developers of AI systems get to decide whether their systems pose a high risk or not, and as a result have the opportunity to circumvent most requirements set by the regulation. Still, the safeguards that did make it into the act are a step towards protecting people and their rights against AI harms. In our brief guide, we outline the act's significance and what the new rules entail.

General purpose and process

The AI Act is just one piece in the EU's puzzle to govern algorithmic systems and the digital sector in general. As a product safety regulation, the bulk of the rules set by the act target AI systems and those who develop them ("providers" in the AI Act's terminology), focusing mainly on AI's technical aspects. The regulation devotes less attention to socio-political implications, such as power imbalances between those who implement these systems and those who are impacted by them.

After a grueling negotiation process, the European Commission, the European Parliament, and the Member States reached a political deal on the act in December 2023. It was formally adopted in March 2024. The law is expected to come into effect over the course of the summer. Once it enters into force, the provisions regarding prohibited applications will apply after six months. The regulation will have an impact beyond the EU: providers who place AI systems on the EU market will be subject to the act even if they are located outside the EU.

Globally, the general trend is to adopt decentralized, sector-specific regulation. In the US, for example, there is no comprehensive federal law that directly regulates AI; several agencies, acting independently of each other, apply existing regulations to AI systems. The AI Act follows its own path, regulating AI in a centralized effort and across all sectors.

Scope: What do the new rules entail?

The AI Act sets rules for AI systems according to their risk level. Systems that pose unacceptable risks, such as systems designed to purposefully manipulate people, are banned. "High-risk" systems require a certain level of transparency and risk mitigation measures. The law also mandates specific transparency requirements for applications that can produce so-called deepfakes and for tools that categorize people based on biometric data. Systems considered less risky for health, safety, and fundamental rights, such as AI-based photo-editing applications or email spam filters, won't be affected by the law.

The law will not apply in the areas of military, defense, and national security. If governments invoke the national security exception, they can even use prohibited AI applications without any of the safeguards foreseen by the act. The act also allows AI applications that are prohibited under the law to be exported from the EU to third countries, applying a double standard with regard to human rights.

More transparency and accountability, sometimes

People will need to be notified when they are directly interacting with AI systems such as chatbots, unless this is obvious from the context. They will also need to be informed when they are exposed to emotion recognition and biometric categorization systems, unless the police are using such systems. So-called deepfakes will require a disclosure that they were artificially generated.

The AI Act requires the introduction of an EU database for high-risk AI systems on the market that are used by public bodies. This is meant to ensure that the public and public interest organizations are informed about the context in which AI systems are used. By obtaining information about where such systems are used, by whom, and for what purposes, academics and organizations can conduct public interest research into the systems' impact. However, systems used in law enforcement, migration, and border control will not be disclosed to the public. Those will be listed in a separate, non-public database.

Before high-risk AI systems are deployed in the public sector, a fundamental rights impact assessment needs to be conducted. The purpose of such an assessment is to identify the risks that a given AI system's use may entail and to determine whether they are acceptable under fundamental rights law. The systems should not be deployed if the risks are not acceptable or cannot be mitigated. The law does not, however, oblige deployers to mitigate the risks they identify in their impact assessments. The assessments' effectiveness is questionable if only risk identification is mandatory but mitigation is not.

If individuals believe that they have been significantly and adversely impacted by a decision based on an AI system, they have the right to obtain an explanation of the decision-making process. Individuals also have the right to lodge a complaint with national authorities if they think that the regulation has been infringed. It remains to be seen how feasible it will be to get a meaningful explanation from the deployer and whether the authorities will be able to enforce compliance.

Fines for non-compliance with the act are €35 million or 7% of the offending company's global annual turnover (whichever amount is higher) for violations involving banned AI applications, €15 million or 3% for violations of the AI Act's obligations, and €7.5 million or 1.5% for the supply of incorrect information.
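To make the "whichever amount is higher" rule concrete, here is a minimal Python sketch using the figures cited above; the tier names and the function itself are our own illustration, not terms or tooling from the regulation:

```python
# Illustrative sketch only: computes the maximum possible fine under the
# AI Act's "whichever amount is higher" rule, using the figures cited above.
# The tier names below are our own shorthand, not terms from the regulation.

FINE_TIERS = {
    "banned_application":    (35_000_000, 0.07),   # banned AI applications
    "other_obligation":      (15_000_000, 0.03),   # other AI Act obligations
    "incorrect_information": (7_500_000,  0.015),  # supplying incorrect information
}

def max_fine(global_annual_turnover_eur: float, tier: str) -> float:
    fixed_amount, turnover_share = FINE_TIERS[tier]
    # The higher of the fixed amount and the turnover-based amount applies.
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# Example: a company with €2 billion in global annual turnover that deploys
# a banned application faces up to max(€35m, 7% of €2bn) = €140 million.
print(max_fine(2_000_000_000, "banned_application"))
```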

Insufficient protection: law enforcement and security

In the area of law enforcement, the regulation partially bans the use of AI systems in public spaces if they can identify people based on their biometric data, such as face recognition. There are some basic safeguards against biometric identification in real time. The bar the AI Act sets for allowing post remote biometric identification in public spaces (when the identification happens some time after the recording), on the other hand, is very low: the suspicion of any crime can justify the use of such a system.

Categorizing biometric data to deduce race, political beliefs, or sexual orientation is prohibited, while some forms of biometric categorization, for example categorization by gender, remain legal under the AI Act. Some prohibitions reflect existing power imbalances: Emotion recognition, for instance, is banned in employment and education but allowed for law enforcement and migration authorities, which could lead to such systems being used against marginalized people and communities.

None of the bans meaningfully apply to the migration context. The list of high-risk systems fails to capture the many AI systems used there. It excludes harmful systems such as fingerprint scanners or forecasting tools used to predict, interdict, and curtail migration. There are also exemptions from the transparency obligations in the migration context, which undermine effective public scrutiny.

The use of biometric applications can ultimately lead to mass surveillance and the violation of fundamental rights such as the right to privacy, free expression, and peaceful protest. It also intensifies the power imbalances between those who watch and those who are being watched.

Unfortunately, other international regulations on AI follow the AI Act's approach. The Council of Europe's recently adopted Convention on Artificial Intelligence, for instance, will not be applicable if AI systems are developed or used under the guise of "national security." In addition, states can choose how to apply the convention to private companies, which can even mean reducing it to non-binding measures.

What’s next?

The AI Act is a step forward in the governance of AI. The task now is to ensure the effective implementation of the regulation as well as to fill the legislative gaps.

Many of the rules laid out in the AI Act need to be specified, standardized, and implemented. The next three years will be decisive for the enforcement of the AI Act, with different EU institutions, national lawmakers, and even company representatives setting standards, publishing interpretive guidelines, building oversight structures, and driving the Act’s implementation across the EU’s member countries. Let’s hope that this work is not done behind closed doors.

Read more on our policy & advocacy work on the Artificial Intelligence Act
