Up for vote today is the “general approach” of the Artificial Intelligence Act (AI Act). This is the text that member states need to agree to in their forum, the Council of the EU, to start negotiating what the law will finally look like vis-à-vis the European Parliament and the European Commission. For the so-called trilogues with these two other bodies – scheduled to begin next year – the general approach is an immensely important document. The further it is from the Parliament’s position, the more contested the trilogues will be – possibly to the point that the AI Act’s adoption would be stalled for many years, if not abandoned altogether.
Lavish exemptions for security agencies
Originally, under the Commission’s draft AI Act, all high-risk AI systems placed on the market would have to be registered by their provider in the EU database the Act foresees. As AlgorithmWatch and other human rights defenders have argued, a meaningful transparency regime would also entail information on how these systems are actually used in practice. Such use can entail significantly different – and significantly higher – risks than can be gleaned from the mere description of the system supplied to the database by the provider (the company developing and selling the system).
Contrary to the agreement laid down in its coalition treaty, and in sharp divergence from statements made earlier in the process, the German government has backtracked on positions that would help protect the rights of people affected by AI-based systems.
Germany had recognized the importance of making transparent the actual uses of AI systems. In the government’s position paper from March of this year, it stressed that “due to the unique role and responsibility public authorities bear […] public authorities should be subject to more stringent transparency requirements when using AI systems”, and called for the registration of all AI uses by public authorities in the EU database, regardless of risk category. Germany was active in pushing for the registration of certain high-risk uses by public authorities in the Council negotiations, which would facilitate greater transparency for the contexts in which these systems are used.
Seemingly in line with this, the German government advocated for either a separate law or a new chapter within the AI Act that would specifically lay down rules for AI systems used by public authorities. It argued that this was necessary “in order to adequately meet the specific needs of these public authorities and fundamental rights requirements of sovereign actions” (“Stellungnahme der Bundesregierung”, 16.03.2022) – which, at first glance, was a plausible and convincing motive.
But as Tagesspiegel Background reported, Germany had around the same time called for relaxing requirements in the areas of law enforcement, migration, asylum, and border control. These are areas where standards should be particularly high, because the people affected have fewer means to defend themselves and are often already discriminated against, and because state action can have massive consequences for their freedom, autonomy, and well-being. Soon after, the Council’s compromise text of October 19, 2022 already contained exceptions from the transparency regime for these domains. At this point, all evidence points to the conclusion that Germany’s call for a separate regulation of public authorities’ uses of AI is no longer directed towards increasing the protection of fundamental rights.
Remote biometric identification
This development in the AI Act negotiations mirrors what happened with Germany’s stance on systems for remote biometric identification (RBI). Initially, Germany was one of the few member states that criticized the lax provisions on RBI in the Commission’s draft AI Act. In its coalition treaty, the new coalition government stressed that “biometric identification through AI in public spaces must be ruled out by an EU legislation.”
But after months of negotiations, the draft general approach’s provisions on RBI in publicly accessible spaces still include many dangerous loopholes and limitations – so many, in fact, that the draft now enables the use of RBI rather than prohibiting it. As it stands, only ‘real-time’ RBI would be forbidden, and only law enforcement authorities and their subcontractors would be subject to this prohibition. This means that private entities, as well as public authorities other than law enforcement, are not covered by the ban, and the ban does not apply to RBI performed not in real time but on stored data. What is more, even for law enforcement, a number of exceptions would apply – for example, when RBI is used to prevent threats to people’s health and physical safety or acts of terrorism, or to identify alleged perpetrators of certain types of offences. Lastly, according to the Council, none of the AI Act’s provisions, including the ban on RBI, should apply in situations where member states invoke their ‘national security’.
This draft general approach clearly stands at odds with what the German government laid down in its coalition treaty. Germany missed the opportunity to send a strong signal to Brussels to strengthen fundamental rights protection in the AI Act – a signal that would have been critical, given the lax positions of many other member states that have a tradition of prioritizing economic objectives over reliable safeguards for people’s rights.
Who’s pulling the strings?
On the German national level, two ministries are in charge of negotiating the AI Act: the Federal Ministry of Justice and the Federal Ministry for Economic Affairs and Climate Action. Historically, though, whenever matters of national security and law enforcement are concerned, the Federal Ministry of the Interior and Community has made certain that its position plays a decisive role in the law-making process. It can be credited with pushing the EU to create separate data protection rules for the field of law enforcement, distinct from the well-known General Data Protection Regulation (GDPR).
The ministry has a track record of hard-line positions on expanding the powers of law enforcement. It tested real-time remote biometric identification systems in one of Germany’s major train stations (an experiment that failed spectacularly), and nothing has changed now that the ministry is headed by Social Democrat Nancy Faeser rather than the staunchly conservative Bavarian Horst Seehofer. Currently, Faeser is openly pushing for a new law on data retention, even though the European Court of Justice has ruled such retention incompatible with EU law and her coalition partners want to pursue a different approach.
Sources in government and parliament have confirmed to AlgorithmWatch that both demands – to water down restrictions on remote biometric identification and to create more exemptions for AI uses in law enforcement, migration, asylum, and border control in the AI Act – originate from the interior ministry. It seems high time that Nancy Faeser be reminded of the promises the parties made in their coalition agreement – not only by her own party, the Social Democrats, but first and foremost by Marco Buschmann, the Liberal party justice minister, and Robert Habeck of the Green party, the minister for economic affairs and climate action.