
The AI Act: First Steps Towards Lawful AI
The EU created a law to regulate Artificial Intelligence across all sectors. It is an important initiative to protect fundamental rights from the risks stemming from AI – but one with major flaws.

What is the AI Act?
The AI Act is one of several EU measures to govern the digital sector. It is designed as a product safety regulation because AI systems can impair fundamental rights: car insurance companies discriminate against customers by automatically calculating different premiums for almost identical profiles; companies and state authorities restrict the rights to privacy, freedom of expression, and assembly by deploying AI systems that monitor public spaces and classify people as dangerous based on their appearance; and people fall under suspicion through “predictive” policing tools merely because they happened to have contact with suspects.
The AI Act’s approach
The AI Act was formally adopted in May 2024 and entered into force in August 2024; most of its provisions will become enforceable in August 2026. To bridge the transitional period before full implementation, the Commission has launched the AI Pact, which is meant to get AI developers to voluntarily adopt key obligations of the AI Act ahead of the legal deadlines. The law does not apply retroactively to systems that were already in use. The regulation will have an impact beyond the EU: AI providers who place AI systems on the EU market are subject to the AI Act even if they are located outside the EU.
The AI Act sets rules for AI systems according to their risk level. Systems that pose unacceptable risks – such as systems designed to purposefully manipulate people to their detriment – are banned outright. “High-risk” systems require a certain level of transparency and risk mitigation measures. This also applies to applications that can generate deepfakes and to tools that categorize people based on biometric data.
Good efforts
People must be notified when they are directly interacting with AI systems such as chatbots (if it is not obvious), or when they are exposed to emotion recognition or biometric categorization systems that process their personal data – unless these systems are used for law enforcement purposes. Deepfakes require a disclosure that they were artificially generated.
The AI Act requires the creation of an EU database of high-risk AI systems on the market to ensure that the public and public interest organizations are informed about their context of use. (Systems used in law enforcement, migration, and border control, however, will not be disclosed to the public but listed in a non-public database.)
All deployers must ensure human oversight and, in case of a malfunction or harm, stop using the system and inform the provider as well as the authorities.
If individuals have been significantly or even adversely affected by a decision based on an AI system, they have the right to obtain an explanation of the role the AI system played in the decision-making process. They also have the right to lodge a complaint with national authorities if they believe the regulation has been infringed.
Companies that fail to comply with the Act face fines in the millions of euros. They can reduce the fine by cooperating with the authorities to limit the harms caused by their AI systems.

We created an overview of the risks and dangers that come with AI to explain why AI could threaten our rights in the first place.
AI Regulation in the Making
In the area of law enforcement, the regulation partially bans the use of AI systems in public spaces if they can identify people based on their biometric data, such as face recognition. There are some basic safeguards against real-time biometric identification. The bar the AI Act sets for allowing post remote biometric identification in public spaces (when the identification happens some time after the recording), on the other hand, is very low: the suspicion of any crime can justify the use of such a system.
Categorizing biometric data to deduce race, political beliefs, or sexual orientation is prohibited, but some forms of biometric categorization – gender categorization, for example – remain legal under the AI Act. Emotion recognition is banned in employment and education but allowed for law enforcement and migration authorities. None of the bans meaningfully apply to the migration context.
The law will not apply in the areas of military, defense, and national security. If governments invoke the national security exception, they can even use prohibited AI applications without any of the safeguards foreseen by the Act.
Another weak spot of the law is that the developers of AI systems themselves decide whether their systems pose a high risk – and thus have the opportunity to circumvent most requirements set by the regulation.
The next years will be decisive for the enforcement of the AI Act, with different EU institutions, national lawmakers, and even company representatives setting standards, publishing interpretive guidelines, building oversight structures, and driving the Act’s implementation across the EU’s member countries.
Do you believe that human rights in the digital space must be constantly protected and strengthened? Then become an AlgorithmWatch supporting member.
With your support, we can step on the toes of politicians and companies, and confront them with evidence that they don't live up to their responsibility. We do so by conducting thorough research, campaigning for change, and broadening our expertise.

You can find more information about your supporting membership here.




