Our statement on the draft AI Act to the German government and public

On 26 September, the Digital Committee of the German Parliament held a hearing on “EU regulation of Artificial Intelligence, including competitiveness in the field of AI and blockchain technology,” with a major focus on the upcoming AI Act. Our Head of Policy & Advocacy, Angela Müller, was invited to give a statement on behalf of AlgorithmWatch.

Our message to the German government and public was clear: if the AI Act is to achieve its objective of protecting fundamental rights against harms stemming from AI applications, the shortcomings of the regulation must be corrected. Angela Müller's statement (in German) can be watched in the live-stream recording on bundestag.de, starting at 34:26.


First, it is essential that systems which can have a large-scale adverse impact on our human rights are captured by the regulation. For that, the definition of AI must be kept broad, and there should be no blanket exemption from the scope for the development and use of AI systems for national security purposes.

Second, since the obligations set out in the proposal predominantly target the providers of AI systems, transparency and accountability mechanisms for deployers of AI systems (“users” in the terminology of the AI Act) must be strengthened. We need transparency about the context in which AI systems are deployed and about the impact of such deployment on our fundamental rights before deployment takes place (a so-called fundamental rights impact assessment), in order to prevent discrimination and injustice.

Lastly, people affected by AI systems must be empowered beyond what the draft AI Act currently offers. At a minimum, people should be informed when they are affected by a high-risk AI system, have the right not to be subject to an AI system that is non-compliant with or prohibited under the AI Act, have the right to an explanation of how the system reached a decision about them, and ultimately have the right to challenge such decisions through legal remedies.

Read the full statement in German in the PDF document below.

Read more about our policy & advocacy work on the Artificial Intelligence Act