Swiss police automated crime predictions but have little to show for it

A review of three automated systems used by the Swiss police and judiciary reveals serious issues. Their real-world effects are impossible to assess due to a lack of transparency.

This story is part of AlgorithmWatch's upcoming report Automating Society 2020, to be published later this year. Subscribe to our newsletter to be alerted when the report is out.

The Swiss police and justice authorities use, by one count, over 20 different automated systems to estimate or predict inappropriate behavior. Police and justice are largely regional competencies in Switzerland; each canton might have its own systems in place.

Based on a series of 2018 reports by SRF, the Swiss public-service broadcaster, we reviewed three of them.

Predicting burglaries

Precobs has been used in Switzerland since 2013. The tool is sold by a German company that makes no secret of its kinship with “Minority Report”, a science-fiction story in which “precogs” predict crimes before they occur. (The plot revolves around the precogs’ frequent failures and their subsequent cover-up by the police.)

It tries to predict burglaries from past data, based on the assumption that burglars often operate in small areas. If a cluster of burglaries is detected in a neighborhood, the theory goes, the police should patrol that neighborhood more often to put an end to it.
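
Precobs’s exact method is proprietary. As a rough illustration of the near-repeat idea described above, a flagging rule might look like the sketch below; the radius, time window and threshold are arbitrary assumptions, not Precobs parameters.

```python
from datetime import datetime, timedelta

# Illustrative near-repeat check: flag a location when several burglaries
# cluster around it in space and time. All parameters are assumptions.
RADIUS_KM, WINDOW, THRESHOLD = 1.0, timedelta(days=7), 3

def near(a, b):
    # Crude planar distance in km (~111 km per degree; good enough at city scale).
    return 111 * ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= RADIUS_KM

def should_patrol(incidents, here, now):
    """incidents: list of ((lat, lon), datetime) burglary records."""
    cluster = [loc for loc, when in incidents
               if near(loc, here) and timedelta(0) <= now - when <= WINDOW]
    return len(cluster) >= THRESHOLD

burglaries = [((47.370, 8.540), datetime(2020, 3, 1)),
              ((47.372, 8.541), datetime(2020, 3, 3)),
              ((47.375, 8.543), datetime(2020, 3, 5))]
print(should_patrol(burglaries, (47.370, 8.540), datetime(2020, 3, 6)))  # True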

Three cantons use Precobs: Zürich, Aargau and Basel-Land, which together account for almost a third of the Swiss population. Burglaries have fallen dramatically since the mid-2010s. The Aargau police even complained in April 2020 that there were now too few burglaries for Precobs to work with.

But burglaries fell in every Swiss canton, and the three that use Precobs are nowhere near the best performers. Between 2012-2014, when burglaries were at their peak, and 2017-2019, when Precobs was in use in all three cantons, the decrease in Zürich and Aargau was smaller than the national average of -44%, making it unlikely that Precobs had a strong effect on burglaries.
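
For reference, the figure behind this comparison is an ordinary percentage change between two three-year averages. In the sketch below, the yearly burglary counts are invented placeholders; only the -44% national average comes from the statistics cited above.

```python
# Percentage change between the 2012-2014 peak and the 2017-2019 period.
# The yearly counts are invented placeholders, chosen only for illustration.
peak = sum([1500, 1400, 1300]) / 3      # average burglaries per year, 2012-2014
recent = sum([900, 800, 650]) / 3       # average burglaries per year, 2017-2019
print(f"{(recent - peak) / peak:.0%}")  # -44%, matching the national average
```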

[External chart hosted on datawrapper.com]

A 2019 report by the University of Hamburg could not find any evidence of the efficacy of predictive-policing software, including Precobs. We could find no public document mentioning the cost of the system for Swiss authorities, but the city of Munich paid €100,000 for the installation of Precobs, operating costs not included.

Predicting violence against women

Six cantons (Glarus, Luzern, Schaffhausen, Solothurn, Thurgau and Zürich) use the Dyrias-Intimpartner system to predict the likelihood that a person will assault their intimate partner. Dyrias stands for “dynamic system for the analysis of risk” and is also built and sold by a German company.

According to a 2018 report by SRF, Dyrias requires police personnel to answer 39 “yes” or “no” questions about a suspect. The tool then outputs a score on a scale from 1 (harmless) to 5 (dangerous). While the total number of persons assessed with the tool is unknown, a tally by SRF showed that 3,000 individuals were labeled “dangerous” in 2018 (though not every such label was necessarily produced by Dyrias).

The vendor of Dyrias claims that the software correctly identifies 8 out of 10 potentially dangerous individuals. However, another study looked at the false positives (individuals labeled dangerous who were in fact harmless) and found that 6 out of 10 people flagged by the software should have been labeled harmless. In other words, Dyrias can boast good results only because it takes no risks and hands out the “dangerous” label liberally. (The company behind Dyrias disputes these results.)
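
The two figures measure different things and can both be true at once: the vendor’s 8 out of 10 is the tool’s sensitivity (the share of truly dangerous people it catches), while the study’s 6 out of 10 concerns the share of flagged people who are in fact harmless. A minimal sketch, using cohort numbers that are our assumption rather than figures from either study:

```python
# Confusion-matrix arithmetic showing how both claims can hold at once.
# The cohort sizes are assumptions; only the two ratios come from the article.
dangerous, harmless = 100, 900        # assumed underlying population
true_positives = 0.8 * dangerous      # vendor's claim: 8 of 10 dangerous people flagged
false_positives = 120                 # assumed number of harmless people also flagged
flagged = true_positives + false_positives    # 200 people flagged in total

print(f"{false_positives / flagged:.0%} of flagged people are harmless")  # 60%
```

High sensitivity and a 60% error rate among those flagged are perfectly compatible; the difference is simply which denominator the percentage is computed against.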

Even if the tool’s performance were improved, its effects would still be impossible to assess. Justyna Gospodinov, the co-director of BIF-Frauenberatung, an organization that supports victims of domestic violence, told AlgorithmWatch that, while cooperation with the police was improving and systematic risk assessment was a good thing, she could not comment on Dyrias itself: when her organization takes on a new case, it has no way of knowing whether the software was used.

Predicting recidivism

Since 2018, all justice authorities in German-speaking cantons have used ROS (an acronym for “Risikoorientierter Sanktionenvollzug”, or risk-oriented execution of prison sentences). The tool labels prisoners A when they have no risk of recidivism, B when they could commit a new offense, or C when they could commit a violent crime. Prisoners can be assessed several times but, upon subsequent tests, can only move from category A to B or C, never the other way around.
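
The one-way rule is easy to state precisely. The sketch below is our illustration of the constraint as described, not ROS code; the category names follow the report.

```python
# Illustration of the one-way category rule described above (not ROS code).
RANK = {"A": 0, "B": 1, "C": 2}  # A: no risk, B: possible reoffense, C: possible violent crime

def reassess(current: str, proposed: str) -> str:
    """Apply a new test result: categories can only ratchet upward."""
    return proposed if RANK[proposed] > RANK[current] else current

assert reassess("A", "C") == "C"   # an upgrade sticks
assert reassess("C", "A") == "C"   # a downgrade is ignored: no way back from C
```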

Based on a 2013 study by the University of Zürich, a report by SRF revealed that only a quarter of the prisoners in category C committed a new crime upon release (75% of those flagged were false positives), and that only one in five of those who did commit a new crime had been placed in category C (80% of reoffenders were missed). A new version of the tool was released in 2017 but has yet to be reviewed.
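
In more standard terms, these two figures are the false discovery rate (the share of flagged prisoners who did not reoffend) and the miss rate (the share of reoffenders the tool did not flag). A short sketch with an assumed category-C headcount; only the two ratios come from the study cited above.

```python
# Reconstructing the two error rates reported for ROS. The size of
# category C is an assumption; only the 1-in-4 and 1-in-5 ratios are sourced.
in_C = 200                        # assumed number of prisoners labeled C
reoffended_in_C = 0.25 * in_C     # a quarter of category C reoffended -> 50

print(f"False discovery rate: {1 - reoffended_in_C / in_C:.0%}")  # 75%

all_reoffenders = reoffended_in_C / 0.20   # C held only 1 in 5 reoffenders -> 250
print(f"Miss rate: {1 - reoffended_in_C / all_reoffenders:.0%}")  # 80%
```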

The French- and Italian-speaking cantons are working on an alternative to ROS, to be deployed in 2022. While it keeps the three categories, their tool will only be used in conjunction with interviews with the prisoner being assessed.

Mission: Impossible

Social scientists can be very successful at predicting aggregate outcomes. In 2010, the Swiss statistics office predicted that the resident population of Switzerland would reach 8.5 million by 2020 (actual 2020 population: 8.6 million). But no scientist would try to predict the date of death of a given individual: life is simply too complicated.

In this regard, demography is no different from criminology. Despite claims to the contrary by commercial vendors, predicting individual behavior is likely impossible. In 2017, a group of scientists set out to settle the issue. They asked 160 teams of researchers to predict school performance, the likelihood of being evicted from one’s home, and four other outcomes for thousands of teenagers, based on detailed data collected since their birth; thousands of data points were available for each child. The results, published in April 2020, are humbling. Not only could no team predict any outcome with meaningful accuracy, but the teams that used artificial intelligence performed no better than those that used only a few variables and basic statistical models.

Moritz Büchi, a senior researcher at the University of Zürich, is the only Swiss scholar who took part in this experiment. In an email to AlgorithmWatch, he wrote that while crime was not part of the outcomes under scrutiny, the insights gained from the experiment probably apply to predictions of criminality. This does not mean that predictions should not be attempted, Mr Büchi wrote. But turning simulations into ready-to-use tools gives them a “cloak of objectivity” which can discourage critical thinking, with potentially devastating consequences for the people whose future is predicted.

Precobs, which does not attempt to predict the behavior of specific individuals, does not fall into the same category, he added. More policing could have a deterrent effect on criminals. However, hotspot detection relies on historical data, which might lead to the over-policing of communities where crime was reported in the past, in a self-reinforcing feedback loop.
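
The mechanism is easy to simulate. The sketch below is a generic illustration of such a feedback loop, not a model of Precobs: two districts with identical true crime rates, where crime is only recorded when a patrol is present, and patrols go where crime was recorded.

```python
import random

random.seed(1)
TRUE_RATE = 0.3      # identical daily crime probability in both districts (assumption)
recorded = [30, 10]  # historical records: district 0 was reported on more (assumption)

for day in range(5000):
    share = [r / sum(recorded) for r in recorded]  # patrol where crime was recorded
    for d in (0, 1):
        committed = random.random() < TRUE_RATE
        observed = random.random() < share[d]      # only patrolled crime is recorded
        if committed and observed:
            recorded[d] += 1

print(recorded)  # the initial skew persists in the recorded statistics
```

Even though both districts are identical, the district that starts with more records keeps attracting patrols and therefore keeps generating records.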

Chilling effects

Despite this patchy track record, and evidence that individual outcomes are all but impossible to predict, Swiss law enforcement authorities keep using tools that claim to do just that. Their popularity is due in part to their opacity: very little public information exists on Precobs, Dyrias and ROS. The people impacted, who are overwhelmingly poor, rarely have the financial resources to challenge automated systems, and their lawyers usually focus on verifying the basic facts alleged by the prosecution.

Timo Grossenbacher, the journalist who investigated ROS and Dyrias for SRF in 2018, told AlgorithmWatch that finding people affected by these systems was “almost impossible”. Not for lack of cases: ROS alone is used on thousands of inmates each year. Rather, the systems’ opacity prevents watchdogs from shedding light on predictive policing.

Without more transparency, these systems could have a “chilling effect” on Swiss society, according to Mr Büchi of the University of Zürich. “These systems could deter people from exercising their rights and could lead them to modify their behaviors,” he wrote. “This is a form of anticipatory obedience. Being aware of the possibility of getting (unjustly) caught by these algorithms, people may tend to increase conformity with perceived societal norms. Self-expression and alternative lifestyles could be suppressed.”
