Algorithmic systems are increasingly used in all areas of life – but often in very opaque ways. When they make important decisions about our lives, their impact is significant, and they can turn lives upside down: a wrongful algorithmic indication of social welfare fraud can push people into financial ruin, students can lose their university places to a non-transparent and discriminatory algorithmic grading system, and employees can face criminal conviction after an erroneous system claims money went missing from their site.
The European Union’s (EU) response to harms stemming from AI is the Artificial Intelligence Act (AI Act). It is the first legislation worldwide that aims to regulate AI across all sectors. Other global players are watching its development attentively, as the law will also have effects outside the EU.
Currently, the EU Parliament and the Council of the EU are each negotiating, within their own institutions, the draft written by the EU Commission. After that, the three bodies will enter trilogue negotiations.
Here is a guide to help you understand this new regulation and the political processes around it.
Who will be affected by the legislation?
First, all people living in the EU, and our societies at large, will be affected by the legislation, since we are the ones predominantly impacted by the rules set out for AI-based systems. Second, the impact of the AI Act will not only be felt inside the EU: EU governments use the latest technology for border control to monitor third-country citizens, predominantly people on the move.
Third, companies that provide or deploy AI systems in the EU (‘providers’ and ‘users’ in the terminology of the bill) will need to comply with the rules it sets out. The AI Act will apply whenever an AI-based system is used in the EU, regardless of where the operator is based – or whenever an output of such a system is used within the EU, regardless of where the system itself is based.
Machine Learning only? Scope and applicability
The definition of an ‘AI-based system’ will largely determine which systems fall within the reach of the regulation. In the EU Commission’s draft AI Act, the term was defined rather broadly. Currently, the Council of the EU, composed of the member states, is pushing for a narrow definition that would limit the term to Machine Learning. Many systems that are technologically ‘simpler’ than those relying on Machine Learning would then fall outside the scope of the Act – yet, as evidence shows, their impact can be just as harmful as that of systems relying on more sophisticated techniques.
Furthermore, the draft AI Act lays down that the regulation shall not apply when AI systems are used for military purposes. The Council and some Members of the EU Parliament (MEPs) would extend this limitation by also excluding systems used for national security from the scope of the Act – which would allow (autocratic) governments to deploy biometric mass surveillance or social scoring, even though those are prohibited under the AI Act, in the name and under the cover of ‘national security’.
What’s at stake?
To address the adverse impact of AI-based systems on people’s safety, health, and fundamental rights, the AI Act follows a risk-based approach, setting legal rules according to the respective level of risk: systems that come with unacceptable risks are prohibited; high-risk systems are subject to certain rules; and AI systems deemed to pose low or minimal risks face no obligations.
How the adverse impact of AI-based systems will be mitigated with safeguards – and which rights and redress mechanisms, if any, individuals will have to challenge the predictions and decisions of these systems when their fundamental rights have been infringed – will depend on the regulatory framework of the AI Act.
Social scoring, which evaluates the trustworthiness of a person, is banned in the draft AI Act as it is deemed incompatible with EU values. The ban covers only government-led social scoring, however, and does not include social scoring conducted by private companies. Predictive policing should also be banned, according to EU Parliament co-rapporteurs Dragoş Tudorache and Brando Benifei, as it “violates human dignity and the presumption of innocence, and it holds a particular risk of discrimination”. The draft AI Act restricts law enforcement agencies’ use of face recognition in public places – but with its current loopholes, it would still leave room for biometric mass surveillance practices. Germany is one of the few member states that would support a stronger ban and rule out face recognition in publicly accessible places. These prohibitions on certain applications can, however, still be lifted and their use legitimized if ‘national security’ is somehow ‘invoked’.
High-risk AI systems – such as those deployed in the area of predictive policing, recruitment, or border control – will need to undergo a conformity assessment by the provider and meet certain requirements throughout their lifecycle.
Transparency & Accountability
The AI Act foresees an EU database in which high-risk AI systems would be registered. The database would only cover systems placed on the EU market and would not include information on the context of their deployment. Some MEPs, together with co-rapporteur Brando Benifei, call for an obligation also to register the deployment of a high-risk system when the user is a public authority or someone acting on its behalf.
While the proposed fines for non-compliance with the Act are very steep, redress for people impacted by AI systems is nowhere to be found in the Commission’s draft. As it currently stands, the regulation does not confer individual rights on people impacted by AI systems, nor does it contain any provision for individual or collective redress, or a mechanism by which people or civil society can participate in the investigation of high-risk AI systems. MEPs are currently pushing for rights and redress mechanisms for affected people, such as: the right not to be subject to a non-compliant AI system, the right to an explanation of any decision taken by a high-risk AI system, or the right to lodge a complaint with a supervisory authority.
The EU Commission published its proposal for an EU Artificial Intelligence Act in April 2021. Once the proposal was released, the EU Parliament and the Council started amending the bill. Unconventionally, two committees are leading the negotiations in the EU Parliament: the Committee on the Internal Market and Consumer Protection (IMCO), with co-rapporteur Brando Benifei, and the Committee on Civil Liberties, Justice and Home Affairs (LIBE), with co-rapporteur Dragoş Tudorache. In April 2022, the two co-rapporteurs published their draft report on the AI Act – or at least on the parts they could agree on. MEPs have tabled over 3,000 amendments to the draft report, which will be merged into compromise amendments and are supposed to be voted on in the committees in October. The plenary vote, in which all MEPs in the EU Parliament vote on the Parliament’s version of the bill, is scheduled for November, while the Council is aiming to reach an agreement by December. Once both have reached a consensus within their institutions, the trilogues will start, to be concluded with a final agreement.
In Germany, the Federal Ministry of Justice and the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection are responsible for the bill. Their positions feed into the Council negotiations.