Joint Statement

Upcoming Commission Guidelines on the AI Act Implementation: Human Rights and Justice Must Be at Their Heart

The Artificial Intelligence Act establishes rules for the development and use of AI in the EU. Now that the law is being implemented, civil society calls on the EU Commission to put human rights and justice at the forefront when interpreting the law.

Position

16 January 2025

#aiact #eu

Nikolett Aszódi
Policy & Advocacy Manager

The EU Artificial Intelligence Act (AI Act) is the EU’s major AI rulebook. It sets rules on transparency, risk mitigation, and oversight for the development and use of AI in both the public and the private sector. It officially entered into force on 1 August 2024, which started the timeline for the different provisions to become applicable. The rules applicable as of 2 February 2025 include the list of prohibited AI practices, such as remote face recognition in public spaces by law enforcement (with certain exceptions).

The European Commission has the overall responsibility for the effective implementation of the AI Act. One of these responsibilities is to provide guidelines (Article 96 of the AI Act) that specify how the law is to be implemented in practice.

Among others, the Commission is responsible for developing guidelines on a) the list of prohibited AI practices (Article 5) and b) the definition of an “AI system” (Article 3). In December 2024, the Commission launched a stakeholder consultation on the guidelines related to these two articles of the regulation. The guidelines help with the interpretation of the regulation, as they add more concrete, specific measures to provisions that need further clarification. While we welcome that the Commission officially sought input from stakeholders, civil society calls on the Commission to ensure that the guidelines place fundamental rights and justice at their forefront.

Read our full joint statement with civil society partners here; our main demands are summarised below.

1. Clarify that comparatively ‘simple’ systems are explicitly within the scope of the AI system definition: Such systems should not be considered out of scope of the AI Act just because they use less complicated algorithms. We are concerned that developers might leverage the definition of an AI system and the classification of high-risk AI systems to bypass the obligations of the AI Act. For instance, converting an AI system into a rule-based system could circumvent the AI Act’s requirements while maintaining the same functionality and carrying the same risks. Regulation must therefore focus on potential harm, not just on technical methods. The Dutch SyRI scandal is a clear example of a system that appeared simple and explainable, but which had devastating consequences for people’s rights and lives, especially for racialised people and people with a migrant background.

2. Clarify the prohibitions of systems posing an ‘unacceptable’ risk to fundamental rights, to prevent the weaponisation of technology against marginalised groups and the unlawful use of mass biometric surveillance. Specifically, the guidelines should reflect the following:

  • The ban on social scoring must be clarified to include within its scope existing social scoring practices in Europe, especially in welfare and migration procedures. To this end, the guidelines must: specify that “social behaviour” is to be interpreted broadly, as several elements can be used as risk indicators, such as ‘unusual living arrangements’ in the Danish automated welfare case; clarify that “personal or personality” characteristics also include proxy data related to race, ethnicity, disability, socio-economic status, and other grounds (e.g. postcode in the Dutch child welfare scandal); and limit what can be considered data related to a social context, in order to counter excessive and unlawful data collection, sharing, and merging of datasets, which often goes beyond the information necessary for assessing eligibility for benefits or individual risk. Finally, the guidelines must acknowledge the wide-ranging contexts (including employment, education, welfare, policing, and migration) in which social scoring practices are prevalent and should be prohibited.
  • Notwithstanding the limited reach of the ban on predictive policing, which excludes event- and location-based predictions, the guidelines must clarify that predicting the ‘risk of committing a criminal offence’ covers all systems that purport to predict a wide range of behaviours that are criminalised and have criminal law and administrative consequences. As such, the guidelines should specify that systems making predictions about the likelihood of being registered in a police system (as was the case with the Dutch Prokid system) fall within the scope of the prohibition, as do predictive systems used in migration control where being irregular, or being classified as presenting a risk to public security, is treated as criminal activity. In these cases, such systems amount to criminal risk assessments and must be covered by the ban; risk assessment systems such as those included in ETIAS should also be banned.
  • The current ban on untargeted scraping of facial images leaves room for problematic loopholes. The guidelines must clarify that any derogation from the ban must be in line with the case law of the Court of Justice of the EU, and that any face scraped from the internet or CCTV footage must have a link to the commission of a crime. Otherwise, the facial images of innocent people could be scraped simply because they appear in the same CCTV footage as the commission of a crime. We further urge the Commission to prevent loopholes by deleting the proposed definition of a facial image database, which could create a loophole where systems like Clearview AI or PimEyes, which claim to store only biographical information or URLs and not the actual facial images, would fall outside the prohibition.
  • Civil society has highlighted the problematic legitimisation of so-called “emotion recognition” systems through the AI Act. These tools suffer from fundamental flaws in their scientific underpinnings, are highly intrusive, and could have serious, even life-threatening, consequences for the people subjected to them, especially in contexts such as policing, migration, and health and safety applications. The guidelines must therefore clearly establish the difference between legitimate medical equipment (e.g. heart monitors) and systems that aim to infer or identify people’s emotions (e.g. “aggression detection”). The latter cannot be claimed to be health and safety tools and thereby exempt from the prohibition.
  • For the prohibition on biometric categorisation, it is important that the guidelines clarify that the ban also applies when deductions are made about “ethnicity” and “gender identity”, as these can lead to inferences about “race” and “sex life or sexual orientation” respectively, which are within the scope of the ban. With regard to the exception, the guidelines should correct the incorrect suggestion made in the consultation text that labelling or filtering is permissible in the law enforcement context among others, whereas the AI Act text is clear that this exception applies only in the law enforcement context.

3. Concerning the interplay with other Union law, the guidelines must ensure that human rights law, in particular the EU Charter of Fundamental Rights, is the central guiding basis for implementation, and that all AI systems are viewed within the wider context of discrimination, racism, and prejudice. For this reason, the guidelines must emphasise that the prohibitions serve a preventative purpose and must therefore be interpreted broadly in the context of harm prevention.

Lastly, we note the shortcomings of the Commission’s consultation process: the lack of advance notice and the short time frame for submissions; the failure to publish the draft guidelines, which would have enabled more targeted and useful feedback; the lack of accessible formats for feedback; and strict character limits on complicated, and at times leading, questions that required elaborate answers. For example, Question 2 on the definition of AI systems only asked for examples of AI systems that should be excluded from the definition, thereby allowing the definition of AI to be narrowed but not widened.

We urge the AI Office and the European Commission to ensure that all future consultations related to the AI Act implementation, both formal and informal, give a meaningful voice to civil society and impacted communities and that our views are reflected in policy developments and implementation.

As civil society organisations actively following the AI Act, we expect that the AI Office will ensure a rights-based enforcement of this legislation, and will prioritise human rights over the interests of the AI industry. 

Signed,

Individuals

Read more on our policy & advocacy work on the Artificial Intelligence Act