As of February 2025: Harmful AI applications prohibited in the EU

Bans under the EU AI Act have now become applicable. Certain risky AI systems that have already been trialed or used in everyday life are, from now on, at least partially prohibited.

Nikolett Aszódi
Policy & Advocacy Manager

AI systems can lead to flawed, erroneous, and even biased outcomes. Error-prone and often discriminatory facial recognition systems have already led to the imprisonment of people who were mistakenly identified as criminal offenders. The Dutch childcare benefits scandal demonstrated that innocent people can be flagged as fraudsters by a biased algorithm, leaving many in despair and financial ruin. In such cases, the impact on people’s lives is detrimental, and it can occur at scale, as these systems perpetuate patterns of bias against certain groups of people.

The EU’s AI rulebook – the Artificial Intelligence (AI) Act – is the first comprehensive piece of legislation to lay down red lines for certain AI practices. The law prohibits AI systems that it considers to pose unacceptable risks to people’s safety, health, and fundamental rights. This protection also applies if a system is operated outside the EU but its output is used in the EU. The prohibited AI practices are listed in Article 5 of the regulation.

Non-compliance with the prohibition of the AI practices referred to in Article 5 is subject to administrative fines of up to EUR 35,000,000 or, for enterprises, up to 7 % of their total worldwide annual turnover for the preceding financial year, whichever is higher.
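As a rough, purely illustrative reading of the "whichever is higher" rule (not legal advice, and using a hypothetical turnover figure), the cap can be understood as the greater of the two amounts:

```python
# Purely illustrative sketch of the "whichever is higher" fine cap for
# violations of the Article 5 prohibitions. The turnover figure below is
# hypothetical; actual fines are set case by case by the authorities.

FIXED_CAP_EUR = 35_000_000   # EUR 35,000,000
TURNOVER_SHARE = 0.07        # 7 % of total worldwide annual turnover

def max_fine_cap(annual_turnover_eur: float) -> float:
    """Return the upper limit of the fine for an enterprise: the higher of
    EUR 35 million and 7 % of its total worldwide annual turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

# Example: a company with a hypothetical turnover of EUR 1 billion faces a
# cap of EUR 70 million, since 7 % of turnover exceeds EUR 35 million.
print(max_fine_cap(1_000_000_000))  # 70000000.0
```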

Which AI practices are now (partially) prohibited?

Prohibited are:

- AI systems that use manipulative or deceptive techniques to materially distort people’s behavior in a way that causes or is likely to cause significant harm,
- AI systems that exploit vulnerabilities related to age, disability, or a person’s social or economic situation,
- social scoring systems that lead to detrimental or unfavorable treatment in unrelated contexts,
- AI systems that predict the risk of a person committing a crime based solely on profiling or personality traits,
- the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases,
- emotion recognition systems in workplaces and educational institutions, except for medical or safety reasons,
- biometric categorization systems that infer sensitive attributes such as race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation.

Partially banned:

- real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes. The ban comes with broad exceptions, for example for searching for victims of certain crimes, preventing specific threats such as terrorist attacks, or locating suspects of certain serious offenses.

Retrospective RBI is not banned at all by the Act. A mere suspicion that any crime has taken place is enough to justify its deployment.

Loopholes

EU member states’ security hardliners have successfully lobbied for a "national security" exemption from the AI Act’s requirements. This means that the regulation’s safeguards will not apply to AI systems that are developed or used solely for the purpose of national security, regardless of whether this is done by a public authority or a private company. While national security can be a justified ground for exceptions from the AI Act, this has to be assessed case by case in line with the EU Charter of Fundamental Rights, and must not be misunderstood as a blanket exemption.

The AI Act’s prohibitions only apply to systems that are placed on the EU market or used in the EU. This leaves a dangerous loophole: systems banned because EU regulators deem them incompatible with human rights and EU values can still be exported to third countries.

Read more on our policy & advocacy work on the Artificial Intelligence Act