Explainer: Predictive Policing

Algorithmic Policing: When Predicting Means Presuming Guilt

Algorithmic policing refers to practices that allegedly make it possible to “predict” future crimes and identify future perpetrators by using algorithms and historical crime data. We explain why such practices are often discriminatory, fail to deliver what they promise, and lack a legal justification.

If you would like to learn more about this topic, please get in touch with:

Kilian Vieth-Ditlmann
Team Lead Policy & Advocacy

The term “predictive policing” is not defined in law. It is suggestive as it implies that certain technologies are indeed capable of predicting crimes. Generally speaking, it refers to AI systems and algorithms that are designed to indicate areas, places, and times in which certain types of crimes are likely to occur. Sometimes, they are even used to predict who is likely to commit a criminal act.

Person-based predictive policing utilizes big data capabilities to develop predictive profiles of individuals based on past criminal activity, current associations, and other factors that correlate with criminal propensity.

The Berlin kbO system and similar location-based applications used by police in other German cities “predict” that entire neighborhoods are more affected by crime. The systems’ assumptions stem from historical crime data and other statistical data. The algorithmically generated “crime hotspots” tend to be places where minority groups live and work. Because of these police designations, all inhabitants are at a particularly high risk of being targeted: the police are allowed to conduct “suspicionless” stops and identity checks there, which increases the likelihood that people in those areas are criminalized.

Suspicions turning into more crime

Such recorded checks and incidents also feed back into crime data, distort it (as they are not grounded in reasonable suspicion), and can influence further predictive analyses, leading to a feedback loop in which the same areas and profiles are repeatedly targeted. This phenomenon is known in criminology as the Lüchow-Dannenberg syndrome: an increase in police presence in a location leads to an increase in statistically recorded offenses and crimes. In short, it seems as if more police presence causes more crime. Presumably, more crimes in other areas consequently go unnoticed, as police resources are concentrated in the algorithmically indicated areas.
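To make this mechanism tangible, the minimal simulation below is our own illustration, not a model of any real police system, and all numbers are invented. It shows how an initial imbalance in recorded crime perpetuates itself once patrols are allocated according to those records, even though the actual crime level is identical in both areas.

```python
# Minimal illustrative simulation (not any real police system): two areas with the
# same underlying crime rate, but detection depends on how many patrols are sent,
# and patrols are allocated according to previously *recorded* crime. A small
# initial imbalance in the records is never corrected and keeps reproducing itself.
import random

random.seed(42)

TRUE_CRIME_RATE = 100          # actual offenses per period, identical in both areas
DETECTION_PER_PATROL = 0.05    # share of offenses detected per patrol unit (assumed)
TOTAL_PATROLS = 10

# Area B starts with slightly more *recorded* crime, e.g. due to past over-policing.
recorded = {"area_A": 40, "area_B": 60}

for period in range(10):
    total_recorded = sum(recorded.values())
    for area in recorded:
        # Patrols are allocated in proportion to the existing records (the feedback).
        patrols = TOTAL_PATROLS * recorded[area] / total_recorded
        detection_prob = min(1.0, patrols * DETECTION_PER_PATROL)
        # Both areas have the same real crime level; only detection differs.
        new_records = sum(random.random() < detection_prob for _ in range(TRUE_CRIME_RATE))
        recorded[area] += new_records
    print(f"period {period + 1}: recorded = {recorded}")
```

Run after run, the area that started with more entries in the database keeps receiving more patrols and therefore keeps generating more entries, although nothing about the underlying crime differs between the two areas.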

Analytics and profiling systems used by the police can link suspects to people who only came into contact with them by chance. This casts the net of criminal suspicion wide and criminalizes people by association, even people who are not suspected of any involvement in crime.

Due to the lack of publicly accessible evaluations and details on the inputs and functioning of these algorithmic and automated data analysis systems, the full impact of these systems’ bias cannot be comprehensively analyzed. Nevertheless, there are numerous examples of how such systems directly and indirectly lead to racial profiling, racism, and other forms of discrimination, particularly against Muslims and people perceived as migrants.

The systems in place can create false suspicion and lead (and have led) to surveillance, observation, police visits at workplaces or at home, questioning, halted asylum proceedings, or even arrests, deportation, and “preventive” detention. All of this can occur without any objective evidence of criminal wrongdoing, based on nothing more than data-driven, algorithmically generated suspicion.

Algorithmic policing also targets travelers: the Passenger Name Record (PNR) information system applies to flights arriving in EU countries from third countries, and EU countries can decide to also apply it to flights departing from or arriving in another EU country. The EU PNR Directive allows all data of all flight passengers to be stored and processed for six months, without evidence of any prior criminal act or suspicion. Such passenger data includes all information that has to be submitted when booking a flight, for example the duration and destination of the trip, the credit card number, contact details, dietary requests, and remarks on specific needs such as assistance for people with disabilities. Algorithms built on police assumptions search the passenger name records for supposedly conspicuous patterns. It is not publicly known which patterns the police consider suspicious. People marked as suspicious might be put under observation or arrested, their residency status might be investigated, or their entry into a country might be refused.
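The actual matching criteria are secret, but a purely hypothetical sketch can illustrate how rule-based pattern matching over booking data works in principle: every passenger is screened, and anyone whose booking happens to match a generic rule is flagged, without any individual evidence. All field names and rules below are invented for illustration only.

```python
# Purely hypothetical sketch of rule-based pattern matching over booking records.
# The real PNR matching criteria are not public; every field name and rule below
# is invented for illustration only.
from dataclasses import dataclass

@dataclass
class PassengerRecord:
    destination: str
    trip_length_days: int
    paid_cash: bool
    booked_days_before_departure: int

def flag_record(record: PassengerRecord) -> bool:
    """Return True if the record matches any of the invented 'suspicious' rules."""
    rules = [
        record.paid_cash and record.booked_days_before_departure <= 2,
        record.trip_length_days <= 1 and record.destination in {"XYZ"},  # placeholder destination list
    ]
    return any(rules)

# Every passenger is screened, regardless of any prior suspicion.
passengers = [
    PassengerRecord("XYZ", 1, False, 30),
    PassengerRecord("ABC", 14, True, 1),
]
print([flag_record(p) for p in passengers])  # both bookings match a rule: [True, True]
```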

Inaccuracies and no crime reduction

There is a complete lack of evidence that “predictive” systems even reduce the crime they are created and used to target.

Some findings on the rate of false positives and false negatives are alarming. When Federal Criminal Police Office employees manually checked the automatically generated PNR “hits” between 29 August 2018 and 31 March 2019, only 277 of around 94,000 technical matches were correct, which is an accuracy rate of just 0.3 per cent.

A 2023 analysis revealed that less than 0.5 per cent of the 23,631 crime predictions generated by Geolitica (formerly PredPol) for Plainfield, New Jersey, accurately matched reported crimes, calling the software’s effectiveness into question.
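A quick back-of-the-envelope check of the two figures quoted above, taking the reported totals of roughly 94,000 PNR matches and 23,631 Geolitica predictions at face value:

```python
# Quick check of the two figures quoted above.
pnr_hits_correct = 277
pnr_hits_total = 94_000            # "around 94,000 technical matches"
print(f"PNR accuracy: {pnr_hits_correct / pnr_hits_total:.1%}")  # roughly 0.3%

geolitica_predictions = 23_631
geolitica_hit_rate = 0.005         # "less than 0.5 per cent"
upper_bound = geolitica_predictions * geolitica_hit_rate
print(f"Geolitica correct predictions: fewer than {upper_bound:.0f} of {geolitica_predictions}")
```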

Palantir’s spread and the German Constitutional Court’s ruling

In 2016, the federal and state governments in Germany agreed to modernize the police IT architecture with the “Police 2020” program. The project has been renamed “Police 20/20” or “P 20” (after the 20 police forces involved at state and federal level). The different police systems, applications, and processes are now to be gradually merged by 2030.

Software from the controversial US company Palantir was made available nationwide via “Police 20/20” to make it easier for police forces to quickly search and analyze large amounts of data. The systems allow police to access and merge data from multiple police databases, such as an individual’s history of being stopped, questioned, or searched by police, data on “foreigners,” or data from external sources such as social media, mobile devices, cell towers, and call logs. The systems also allow for profiling and targeting people for whom there is no evidence of involvement in crime, people who are not suspected of any crime, and even victims and witnesses.
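The following schematic sketch is our own simplification, not Palantir’s actual software. It merely illustrates how joining records from databases that were collected for unrelated purposes assembles a detailed profile of one person, including people who appear only as witnesses.

```python
# Schematic illustration only, not Palantir's actual software. It shows how joining
# records from databases that were collected for unrelated purposes assembles a
# detailed profile of a person, including people who are merely witnesses.
from collections import defaultdict

police_stops   = [{"person_id": "p1", "event": "identity check", "place": "Main Station"}]
case_files     = [{"person_id": "p1", "role": "witness", "case": "C-2042"}]  # not a suspect
phone_metadata = [{"person_id": "p1", "contact": "p7", "calls": 12}]

def merge_profiles(*sources):
    """Group all records from all sources by person_id into one combined profile."""
    profiles = defaultdict(list)
    for source in sources:
        for record in source:
            profiles[record["person_id"]].append(record)
    return dict(profiles)

# One query now returns stop history, case involvement, and call metadata at once.
print(merge_profiles(police_stops, case_files, phone_metadata)["p1"])
```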

The Palantir data profiling systems have been tested or even used without a clear – or indeed, any – legal basis. Police forces and data protection commissioners often disagree about the scope and the lawfulness of the systems. Legal experts have warned that Palantir software merges different data pools that were collected for completely different purposes to create complex and detailed personal profiles, which can lead to illusory correlations. Data of people who were not suspected of involvement in crime, such as witnesses, was included by German federal states such as Hesse. People can be arbitrarily targeted by the police, and as they do not find out that their data is being analyzed, they are deprived of the ability to defend themselves.

In February 2023, the German Federal Constitutional Court ruled that the use of Palantir platforms in Hesse and Hamburg was unconstitutional, as it violated the right to informational self-determination. In Hesse, the Police Act had to be revised following this ruling; the use of certain data sources by the Palantir software hessenDATA and some areas of application were subsequently restricted. The State Commissioner for Data Protection in North Rhine-Westphalia considers it generally necessary to amend the current law, limit the scope of application, and improve procedural safeguards before introducing Palantir systems. In the opinion of the Commissioner for Data Protection in Bavaria, a legal basis for the current VeRA test with personal data in Bavaria has also been lacking. Bavaria nevertheless tested VeRA and also concluded a framework agreement with Palantir that enables other federal states to purchase Palantir software without a new, lengthy tendering procedure. Two federal states, Hesse and North Rhine-Westphalia, continue to use Palantir software (hessenDATA/DAR).

Down by law

In policing, the discriminatory police practices of racial and ethnic profiling, resulting in unjustified stops, identity checks, and searches based on racialized characteristics such as skin color or (presumed) religious affiliation, have long been criticized by victims, victims' associations, academics, human rights organizations, and international organizations.

However, discrimination by police, law enforcement, and data-based systems is still rarely recognized as a structural or institutional problem. The European Commission against Racism and Intolerance (ECRI), set up by the Council of Europe, drew attention to racial profiling in police work back in 2007. However, the ECRI recommendation to commission a nationwide study in Germany on racial profiling – to collect data, develop countermeasures, and prevent future racial profiling – has not yet been implemented. In 2020, then Minister of the Interior Horst Seehofer stopped a planned nationwide study on racial profiling and racism in the police force.

According to the 2023 report “Being Black in the EU” by the European Union Agency for Fundamental Rights, almost half of the people of African descent surveyed in 13 EU countries are confronted with racism and discrimination in their everyday lives. Racist harassment and racial profiling by the police are widespread, and Germany fares particularly badly: 33 per cent of the respondents in Germany had experienced arbitrary police checks in the five years before the survey.

An example from the US also shows what can go wrong if automated systems are deployed prematurely without considering the broader ethical, legal, and social issues. The COMPAS algorithm, intended to predict recidivism, showed inherent bias against black defendants, wrongly flagging them as future criminals at a higher rate than white defendants.
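The disparity criticized in the COMPAS debate is usually measured as the false positive rate per group, that is, the share of people who were flagged as high risk but did not reoffend. The toy records below are invented solely to show how that calculation works, not to reproduce the actual COMPAS figures.

```python
# How the disparity criticized in the COMPAS debate is typically measured: the false
# positive rate (people flagged as high risk who did not reoffend) is computed per
# group. The toy records below are invented solely to show the calculation.
records = [
    # (group, flagged_high_risk, reoffended)
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),  ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", False, True),
]

def false_positive_rate(group: str) -> float:
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    falsely_flagged = [r for r in did_not_reoffend if r[1]]
    return len(falsely_flagged) / len(did_not_reoffend)

for group in ("black", "white"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.0%}")
```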

If racial profiling leads to contacts with the police and the resulting data is fed into an AI system for predictive policing, the systems’ crime predictions will be as racist as the policing was in the first place.

European law to regulate AI-supported law enforcement

The use of AI systems in law enforcement is to some degree regulated by the Law Enforcement Directive (LED). Under Article 11 of the LED, processing of personal data for profiling requires human assessment as a necessary component, as it is forbidden to make decisions based exclusively on automated processing of personal data.

The EU’s AI Act recognizes that law enforcement’s use of certain AI systems often involves a significant risk, potentially leading to surveillance, arrests, and other fundamental rights infringements. If opaque, unexplainable, or undocumented, these systems may also hinder procedural safeguards like the right to fair trial, effective remedy, and presumption of innocence.

Article 5(1)(d) of the AI Act prohibits AI systems for predictive policing based solely on the profiling of a natural person or on assessing their personality traits and characteristics (as opposed to predictions about geographic areas). But it allows the use of AI-based predictive policing if it supports the assessment of humans, such as police officers, prosecutors, or investigators. Many of the predictive policing systems that are not banned are classified as so-called high-risk AI systems under Article 6(2). Annex III lists the use cases that qualify an AI system as “high-risk” under the AI Act.

In particular, the AI Act designates the following uses of AI by law enforcement bodies as high risk: individual risk assessments, emotion recognition, the evaluation of the reliability of evidence, the prediction of criminal offenses, and personality or behavior profiling.

Providers of high-risk AI systems must fulfill a range of requirements in order to be able to sell their AI systems to law enforcement bodies. The deployers of such systems, for example police agencies, border authorities, or prisons, also have to meet certain preconditions, such as conducting a fundamental rights impact assessment before putting a high-risk system into use.

Read more on our policy & advocacy work on biometric recognition.
