Our message to the German government and public was clear: If the AI Act is to achieve its objective of protecting fundamental rights against harms stemming from AI applications, the shortcomings of the regulation must be corrected. Watch Angela Müller's statement in German (starting at 34:26):
First, it is essential that systems which can have a large-scale adverse impact on our human rights are captured by the regulation. For that, the definition of AI must be kept broad, and there should not be any blanket exemption from the scope for the development and use of AI systems for national security purposes.

Second, since the obligations set out in the proposal predominantly target the providers of AI systems, transparency and accountability mechanisms for deployers (called "users" in the terminology of the AI Act) must be strengthened. We need transparency about the context in which AI systems are deployed and about the impact of such deployment on our fundamental rights before deployment takes place (a so-called fundamental rights impact assessment), in order to prevent discrimination and injustice.

Lastly, people affected by AI systems must be given stronger rights than the draft AI Act currently offers. At a minimum, people should be informed when they are affected by a high-risk AI system, have the right not to be subject to an AI system that is non-compliant with or prohibited under the AI Act, have the right to an explanation of how the system made a decision about them, and ultimately have the right to challenge such decisions through legal remedies.
Read the full statement in German in the PDF document below.