Algorithmic systems are increasingly used in all areas of life – but often in very opaque ways. When they make important decisions about our lives, their impact can turn lives upside down: when a wrongful algorithmic indication of social welfare fraud pushes people into financial ruin, when students' university places are revoked because of a non-transparent and discriminatory algorithmic grading system, or when employees face criminal convictions after an erroneous system claims money went missing from their site.
The European Union's (EU) response to the risks stemming from AI is the Artificial Intelligence Act (AI Act). With this law, the EU aims to lay down rules for the development and deployment of AI so that harms can be prevented or minimized. The EU's other objective with the AI Act is to harmonize the EU market when it comes to AI and to foster the uptake of the technology within the EU.
The EU Commission's draft AI Act served as the basis for negotiations in the EU Parliament and the Council. The Council adopted its version of the AI Act, the so-called general approach, at the end of 2022. The EU Parliament agreed on its position in June 2023. Currently, the three institutions are in the final phase of negotiations (the so-called trilogue negotiations) and aim to reach a final agreement on the legislation by the end of the year.
Here is a guide to help you understand this new regulation and the political processes around it.
Who will be affected by the legislation?
First, it is the people in the EU and our societies at large who will be affected by the legislation, as AI systems permeate our everyday life. Second, companies and state authorities that provide or deploy AI systems in the EU ('providers' and 'users' in the terminology of the regulation) will need to comply with the rules set out in it. The AI Act will apply whenever an AI-based system is used in the EU, regardless of where the operator is based – or whenever an output of such a system is used within the EU, regardless of where the system itself is based.
Regrettably, safeguards laid down in the AI Act will only apply to people within the territory of the EU. AI providers based in the EU would still be able to develop systems that will be prohibited under the AI Act and export them to third countries.
Machine Learning only? Scope and applicability
The definition of an 'AI-based system' will largely determine which systems fall within the reach of the regulation. In the EU Commission's draft AI Act, the term was defined rather broadly. The Council of the EU, composed of the member states, has pushed for a narrower definition that limits the term to Machine Learning only. This means that many systems that are technologically 'simpler' than those relying on Machine Learning would not fall within the scope of the Act – even though, as evidence shows, their impact can be just as harmful as that of systems relying on more sophisticated techniques.
Furthermore, the draft AI Act lays down that the regulation shall not apply when AI systems are developed or used for military purposes. The Council extended this limitation by also excluding systems used for national security purposes from the scope of the Act – which would allow (illiberal) governments to use biometric mass surveillance or social scoring, even though those are prohibited under the AI Act, in the name and under the cover of 'national security'.
What’s at stake?
To address the adverse impact of AI-based systems on people's safety, health, and fundamental rights, the AI Act follows a risk-based approach whereby it sets legal rules according to the respective level of risk: systems that come with unacceptable risks are prohibited; high-risk systems are subject to certain rules; and AI systems deemed to carry low or minimal risks face no obligations.
The AI Act's regulatory framework will determine how the adverse impact of AI-based systems is mitigated with safeguards, and which rights and redress mechanisms, if any, individuals will have to challenge the predictions and decisions of these systems when their fundamental rights have been infringed upon.
Social scoring, which evaluates the trustworthiness of a person, is banned in the draft AI Act as it is deemed incompatible with EU values. Both the Council and the EU Parliament extended the draft's ban on government-led social scoring to private companies. Predictive policing should also be banned, according to EU Parliament co-rapporteurs Dragoş Tudorache and Brando Benifei, as it "violates human dignity and the presumption of innocence, and it holds a particular risk of discrimination".

The draft AI Act restricts law enforcement agencies' use of face recognition in public places – however, its current loopholes would still leave room for biometric mass surveillance practices. Germany is one of the few member states that support a ban on real-time remote biometric identification (RBI) in publicly accessible places – while it would allow post-RBI, which can equally lead to mass surveillance. Members of the EU Parliament (MEPs) agreed on the most stringent ban on surveillance through AI in public spaces: they would prohibit both real-time and post-RBI. These prohibitions on certain applications can, however, still be lifted and their use legitimized whenever 'national security' is invoked.
High-risk AI systems – such as those deployed in the area of predictive policing, recruitment, or border control – will need to undergo a conformity assessment by the provider and meet certain requirements throughout their lifecycle.
Transparency & Accountability
The AI Act foresees an EU database in which high-risk AI systems would be registered. Initially, the draft AI Act only obliged providers to register systems that are placed on the EU market, without including information on the context of their deployment. Both the Council and the EU Parliament agreed to strengthen the transparency requirements and to oblige AI deployers that are public authorities to register their use of a high-risk AI system in the EU database. This way, the general public, watchdogs, and research organizations would have access to information about the contexts in which public authorities use high-risk systems.
While the proposed fines for non-compliance with the Act are very steep, redress for people impacted by AI systems is absent from the Commission's draft. On a positive note, MEPs have introduced some rights and redress mechanisms to empower people negatively affected by AI. These include, for instance, the right to be informed when one is exposed to a high-risk AI system and the right to an explanation when one is impacted by such a system.
Legislative process
The EU Commission presented its proposal for an EU Artificial Intelligence Act in April 2021. Once the proposal was published, the EU Parliament and the Council started to amend the regulation. Unconventionally, two committees are leading the negotiations in the EU Parliament: the Committee on the Internal Market and Consumer Protection (IMCO) with co-rapporteur Brando Benifei, and the Committee on Civil Liberties, Justice and Home Affairs (LIBE) with co-rapporteur Dragoş Tudorache. In April 2022, the two co-rapporteurs presented their draft report on the AI Act – or at least on the parts they could agree on. The Council adopted its version of the AI Act (the general approach) in December 2022. In June 2023, the EU Parliament agreed on its position on the AI Act. With all three institutions having agreed on their respective positions, they have now entered the final phase of negotiations. Once they find a compromise, the law can be adopted.
In Germany, the Federal Ministry of Justice and the Federal Ministry for Economic Affairs and Climate Action are responsible for the AI Act. They introduce Germany's positions in the Council negotiations.
Updated on 15 August 2023.
Read more on our policy & advocacy work on the Artificial Intelligence Act