
As of February 2025: Harmful AI applications prohibited in the EU
Bans under the EU AI Act become applicable now. Certain risky AI systems that have already been trialed or used in everyday life are now, at least partially, prohibited.

AI systems can lead to flawed, erroneous, and even biased outcomes. Error-prone and often discriminatory face recognition systems have already led to the imprisonment of people who were mistakenly identified as criminal offenders. The Dutch childcare benefits scandal demonstrated that innocent people can be flagged as fraudsters by a biased algorithm, which left many in despair and financial ruin. In such cases, the impact on people’s lives is detrimental. The harm can also occur at scale, because such systems perpetuate patterns of bias against certain groups of people.
The EU’s AI rulebook – the Artificial Intelligence (AI) Act – is the first comprehensive piece of legislation to lay down red lines for certain AI practices. The law prohibits AI systems that it considers to pose unacceptable risks to people’s safety, health, and fundamental rights. This protection also applies if the system is operated outside the EU but produces an output that is used in the EU. The prohibited AI practices are listed in Article 5 of the regulation.
Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative fines of up to EUR 35,000,000 or, for enterprises, up to 7 % of their total worldwide annual turnover, whichever is higher.
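The fine ceiling works as a maximum of two values. A minimal sketch of the calculation, based on the figures in Article 99(3) of the AI Act (the function name and signature are illustrative, not from the regulation):

```python
def fine_ceiling(annual_turnover_eur: float) -> float:
    """Maximum administrative fine for violating the Article 5 prohibitions:
    EUR 35 million or 7 % of total worldwide annual turnover,
    whichever is higher (Article 99(3) AI Act).
    Hypothetical helper for illustration only."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# For an enterprise with EUR 1 billion in annual turnover,
# the 7 % rule dominates and the ceiling is EUR 70 million:
print(f"{fine_ceiling(1_000_000_000):,.0f}")  # 70,000,000
```

For smaller companies, the flat EUR 35 million figure is the binding ceiling; the 7 % rule only takes over above EUR 500 million in turnover.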
Which AI practices are now (partially) prohibited?
Prohibited are:
- AI systems that use manipulative or deceptive techniques to materially distort people’s behavior, such as voice-activated toys that encourage dangerous behavior in children,
- AI systems which exploit the vulnerabilities of people or groups,
- certain forms of social scoring in which the resulting treatment of people is unrelated to the context in which the data they use was gathered,
- AI systems which create or expand face recognition databases through the untargeted scraping of face images from the internet or CCTV footage (such as Clearview AI or PimEyes),
- live face recognition in public spaces by the police (with exceptions).
Partially banned are:
- predictive policing systems that assess the risk of individuals committing a crime based on their personality traits. The use of predictive policing systems based on “objective and verifiable facts directly linked to a criminal activity” is not banned.
- biometric categorization systems that try to infer people’s race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation from biometric data. This ban does not apply to law enforcement authorities.
- emotion recognition systems in education and at the workplace that try to infer people’s emotional state (e.g., tiredness, a negative attitude to work, stress, anger, or disrespect) and character traits from biometric data such as facial expressions. Emotion recognition for medical or safety reasons remains permitted.
Retrospective remote biometric identification (RBI) is not banned at all by the Act: the suspicion that any crime has taken place is enough to justify its deployment.
Loopholes
EU member states’ security hardliners have successfully lobbied for a “national security” exemption from the AI Act’s requirements. This means that the regulation’s safeguards will not apply to AI systems that are developed or used solely for national security purposes, regardless of whether this is done by a public authority or a private company. While national security can be a justified ground for exceptions from the AI Act, this has to be assessed case by case in line with the EU Charter of Fundamental Rights; the exemption must not be misunderstood as a blanket one.
The AI Act’s prohibitions only apply to systems that are placed on the EU market or used in the EU. This leaves a dangerous loophole: systems that EU regulators have banned as incompatible with human rights and EU values can still be exported to third countries.
Read more on our policy & advocacy work on the Artificial Intelligence Act