Artificial Intelligence (AI) systems are increasingly used by European law enforcement and criminal justice authorities to profile people and areas, predict supposed future criminal behaviour or occurrence of crime, and assess the alleged ‘risk’ of offending or criminality in the future.
However, these systems have been shown to disproportionately target and discriminate against marginalised groups, infringe on the rights to liberty and a fair trial and the presumption of innocence, and reinforce structural discrimination and power imbalances.
Fundamental harms are already being caused by predictive, profiling and risk assessment AI systems in the EU, including:
Discrimination, surveillance and over-policing
Predictive AI systems reproduce and reinforce discrimination on grounds including but not limited to: racial and ethnic origin, socio-economic and work status, disability, migration status and nationality. The data used to create, train and operate AI systems often reflects historical, systemic, institutional and societal discrimination, which results in racialised people, communities and geographic areas being over-policed and disproportionately surveilled, questioned, detained and imprisoned.
Infringement of the right to liberty, the right to a fair trial and the presumption of innocence
Individuals, groups and locations are targeted and profiled as criminal. This results in serious criminal justice and civil outcomes and punishments, including deprivations of liberty, before any alleged act has been carried out. The outputs of these systems are not reliable evidence of actual or prospective criminal activity and should never be used as justification for any law enforcement action.
Lack of transparency, accountability and the right to an effective remedy
AI systems used by law enforcement are often shielded by technological or commercial barriers that prevent effective and meaningful scrutiny, transparency and accountability. It is crucial that individuals affected by these systems' decisions are aware of their use and have clear and effective routes to challenge them.
The signatories call for a full prohibition of predictive and profiling AI systems in law enforcement and criminal justice in the Artificial Intelligence Act (AIA). Such systems pose an unacceptable risk and must therefore be included as a ‘prohibited AI practice’ in Article 5 of the AIA.
Fair Trials (International)
European Digital Rights (EDRi) (Europe)
Access Now (International)
Big Brother Watch (UK)
Bits of Freedom (Netherlands)
Bulgarian Helsinki Committee
Centre for European Constitutional Law – Themistokles and Dimitris Tsatsos Foundation
Citizen D / Državljan D (Slovenia)
Civil Rights Defenders (Sweden)
Controle Alt Delete (Netherlands)
Council of Bars and Law Societies of Europe (CCBE)
De Moeder is de Sleutel (Netherlands)
Digital Fems (Spain)
Electronic Frontier Norway
European Centre for Not-for-Profit Law (ECNL)
European Criminal Bar Association (ECBA)
European Disability Forum (EDF)
European Network Against Racism (ENAR)
European Sex Workers Alliance (ESWA)
Equinox Initiative for Racial Justice (Europe)
Equipo de Implementación España Decenio Internacional Personas Afrodescendientes
Eticas Foundation (Europe)
Helsinki Foundation for Human Rights (Poland)
Greek Helsinki Monitor
Homo Digitalis (Greece)
Human Rights Watch (International)
International Commission of Jurists
Irish Council for Civil Liberties
Iuridicum Remedium (IuRe) (Czech Republic)
Ligue des Droits Humains (Belgium)
Panoptykon Foundation (Poland)
Refugee Law Lab (Canada)
Rights International Spain
ZARA – Zivilcourage und Anti-Rassismus-Arbeit (Austria)