A guide to the AI Act, the EU’s upcoming AI rulebook you should watch out for

The EU will regulate Artificial Intelligence across all sectors. The legislative process has been fiercely contested between the interests of Big Tech and those seeking to safeguard fundamental rights and the public interest. This guide will help you understand the new law, which systems could be banned, and how you might be affected by the new regulation.


26 August 2022 (update: 20 December 2023)


#aiact #eu #explainer #germany #guide

Photo by AbsolutVision on Unsplash

If you want to learn more about our policy & advocacy work on the AI Act, get in touch with

Kilian Vieth-Ditlmann
Deputy Team Lead for Policy & Advocacy
Angela Müller
Head of Policy & Advocacy | Executive Director AlgorithmWatch CH

Algorithmic systems are increasingly used in all areas of life, but often in very opaque ways. When they make important decisions about our lives, their impact is significant, and they can turn lives upside down: when a wrongful algorithmic indication of social welfare fraud pushes people into financial ruin, when students’ university places are revoked because of a non-transparent and discriminatory algorithmic grading system, or when employees are criminally convicted after an erroneous system claims that money went missing from their site.

The European Union’s (EU) response to risks stemming from AI is the Artificial Intelligence Act (AI Act). With the law, the EU aims to lay down rules for the development and deployment of AI so that harms can be prevented or minimized. The EU also aims to harmonize the internal market when it comes to AI and to foster the uptake of the technology within the EU. Here is a guide to understanding this new regulation and the political processes around it.

Who will be affected by the legislation?

First, it is the people in the EU and our societies at large who will be affected by the legislation as AI systems permeate our everyday life. Second, companies and state authorities that provide or deploy AI systems in the EU (providers and users in the terminology of the regulation) will need to comply with the rules set in it. The AI Act will apply whenever an AI-based system is used in the EU, regardless of where the operator is based – or whenever an output of such a system is used within the EU, regardless of where the system itself is based.

Regrettably, safeguards laid down in the AI Act will only apply to people within the territory of the EU. AI providers based in the EU would still be able to develop systems that will be prohibited under the AI Act and export them to third countries.

Machine Learning only? Scope and applicability

The definition of an "AI-based system" will largely determine which systems fall within the reach of the regulation. In the EU Commission’s draft AI Act, the term was defined rather broadly. The Council of the EU, consisting of the member states, has pushed for a narrow definition that limits the term to Machine Learning. This would place many systems that are technologically "simpler" than those relying on Machine Learning outside the scope of the Act – yet, as evidence shows, their impact can be just as harmful as that of systems relying on more sophisticated techniques. Furthermore, the draft AI Act lays down that the regulation shall not apply when AI systems are developed or used for military purposes. The Council extended this limitation by also excluding systems used for national security – which would allow (illiberal) governments to use biometric mass surveillance or social scoring, even though these are prohibited under the AI Act, in the name and under the cover of "national security."

What’s at stake?

To address the adverse impact of AI-based systems on people’s safety, health, and fundamental rights, the AI Act follows a risk-based approach, setting legal rules according to the respective level of risk: systems that come with unacceptable risks are prohibited; high-risk systems are subject to certain rules; and AI systems deemed to come with low or minimal risks face no obligations.

How the adverse impact of AI-based systems will be mitigated with safeguards – and which rights and redress mechanisms, if any, individuals will have to challenge the predictions and decisions of these systems when their fundamental rights have been infringed upon – will depend on the regulatory framework of the AI Act.

Social scoring, which evaluates the trustworthiness of a person, is banned in the draft AI Act as it is deemed incompatible with EU values. The Council and the EU Parliament extended the draft’s ban on government-led social scoring to private companies. Predictive policing should also be banned, according to Dragoş Tudorache and Brando Benifei, as it “violates human dignity and the presumption of innocence, and it holds a particular risk of discrimination.”

The draft AI Act restricts law enforcement agencies’ use of face recognition in public places – yet its current loopholes would still leave room for biometric mass surveillance practices. Germany was one of the few member states that supported a ban on real-time remote biometric identification (RBI) in publicly accessible places, while it would allow post-RBI, which can equally lead to mass surveillance. Members of the EU Parliament (MEPs) agreed on the most stringent ban on surveillance through AI in public spaces: they would ban both real-time and post RBI. These prohibitions on certain applications can, however, still be lifted and their use legitimized whenever "national security" is invoked.

Consequently, reaching a compromise on prohibited AI systems was one of the most difficult parts of the three-way negotiations between the Council, the EU Parliament, and the Commission. The Council's member states resisted a full ban on real-time facial recognition in public, while the Parliament fought for stronger safeguards and narrow exceptions. The compromise agreement includes a ban on biometric categorization based on sensitive characteristics such as race and gender, a full ban on emotion recognition in workplaces and educational contexts, and a ban on scraping facial images from the internet. An independent assessment of these compromises will only be possible once the final wording of the bans is published.

High-risk AI systems – such as those deployed in law enforcement, recruitment, education, or border control – will need to undergo a conformity assessment by the provider and meet certain requirements, including an assessment of such systems’ impact on fundamental rights.

Transparency & Accountability

The AI Act foresees an EU database in which high-risk AI systems would be registered. Initially, the draft AI Act only obliged providers to register systems that are placed on the EU market, without any information on the context of their deployment. Both the Council and the EU Parliament agreed to strengthen the transparency requirements and to oblige AI deployers that are public authorities to register their use of a high-risk AI system in the EU database. As a result, the general public as well as watchdog and research organizations would have access to information about the contexts in which public authorities use high-risk systems.

Non-compliance with the Act will carry fines of €35 million or 7% of the offending company’s global annual turnover (whichever amount is higher) for violations of banned AI applications, €15 million or 3% for violations of the AI Act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information. Citizens will have the right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights. This redress option was introduced by MEPs, as it was not included in the Commission’s initial draft.
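To illustrate the "whichever amount is higher" rule, here is a minimal sketch in Python. The tier amounts and percentages come from the text above; the function name, tier labels, and structure are illustrative choices of ours, not anything defined in the AI Act itself.

```python
# Illustrative sketch of the AI Act's tiered fine ceilings: for each
# violation tier, the maximum fine is the fixed amount or the percentage
# of global annual turnover, whichever is higher.

FINE_TIERS = {
    "banned_ai_practice":    (35_000_000, 0.07),   # €35M or 7%
    "other_obligations":     (15_000_000, 0.03),   # €15M or 3%
    "incorrect_information": (7_500_000,  0.015),  # €7.5M or 1.5%
}

def max_fine(violation: str, global_annual_turnover: float) -> float:
    """Return the maximum possible fine in euros for a violation tier."""
    fixed_amount, pct = FINE_TIERS[violation]
    return max(fixed_amount, pct * global_annual_turnover)

# For a company with €1 billion global turnover violating a ban,
# 7% of €1bn (€70M) exceeds €35M, so the turnover-based amount applies.
print(max_fine("banned_ai_practice", 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed amount dominates: with €100 million turnover, 7% is only €7 million, so the €35 million ceiling applies instead.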

The legislative process

The EU Commission presented its proposal for an EU Artificial Intelligence Act in April 2021. Once the proposal was published, the EU Parliament and the Council started to amend the regulation. Unconventionally, two committees led the negotiations in the EU Parliament: the Committee on the Internal Market and Consumer Protection (IMCO) with co-rapporteur Brando Benifei and the Committee on Civil Liberties, Justice and Home Affairs (LIBE) with co-rapporteur Dragoş Tudorache. In April 2022, the two co-rapporteurs published their draft report on the AI Act – or at least on the parts they could agree on. The EU Commission’s draft AI Act served as the basis for negotiations for the EU Parliament and the Council. The Council adopted its version of the AI Act, the so-called general approach, at the end of 2022. The EU Parliament agreed on its position in June 2023. The three institutions then entered the final phase, the so-called trilogue negotiations, and reached a political agreement on the legislation on 8 December 2023 after intense negotiations under high pressure. However, the complete legal text still needs to be finalized before the Council and the EU Parliament can officially adopt the law.

In Germany, the Federal Ministry of Justice and the Federal Ministry for Economic Affairs and Climate Action are responsible for the AI Act. They feed Germany’s positions into the Council negotiations.

Read more on our policy & advocacy work on the Artificial Intelligence Act
