RisCanvi (“risk change” in Catalan) does not decide whether an inmate will be paroled. The final decision is made by justice system professionals, who are not bound by the result of the algorithm’s analysis. Nevertheless, in the great majority of cases, the professionals’ evaluation matches the algorithm’s.
750 persons use RisCanvi daily
RisCanvi is not a single computer program. It is a comprehensive protocol. Inmates are interviewed every six months (more often if the prison administration sees a reason for it) by professionals who analyze their progress through the lens of preset risk factors. They then enter their results into a computer program which outputs an estimated level of risk (high, medium or low) using a deterministic statistical model.
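The inner workings of the model are not public in this form, but as a rough illustration, a deterministic item-based scale of this kind typically sums weighted item scores and maps the total onto risk bands via fixed cut-off points. The item names, weights and thresholds below are invented for the sketch, not taken from RisCanvi:

```python
# Illustrative sketch of a deterministic item-based risk scale.
# Item names, weights and cut-off points are invented placeholders;
# the real RisCanvi items and coefficients are not published in this form.

ITEM_WEIGHTS = {
    "criminal_record": 3,
    "substance_abuse": 2,
    "employment_instability": 1,
}

CUT_OFFS = [(4, "low"), (8, "medium")]  # totals above 8 map to "high"

def risk_level(item_scores: dict) -> str:
    """Map the weighted sum of professional ratings to a risk band."""
    total = sum(ITEM_WEIGHTS[item] * score for item, score in item_scores.items())
    for threshold, level in CUT_OFFS:
        if total <= threshold:
            return level
    return "high"

print(risk_level({"criminal_record": 2, "substance_abuse": 1, "employment_instability": 0}))  # -> medium
```

The point of such a design is that the same inputs always produce the same output: unlike a machine learning model, the scale can be audited by hand.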
Jordi Camps, head of the rehabilitation service of the Catalonian department for victim care, told AlgorithmWatch that 750 people work with the tool on a daily basis, including psychologists, legal criminologists, social workers and social educators. Among them, one hundred act as ‘validators’: they check whether the data entered into RisCanvi by the other professionals is consistent with the risk level the algorithm outputs.
“The result [of the algorithm] guides the decision-making, but does not condition it. The fact that it gives a low risk to a person does not imply that they will be leaving the prison, and a high risk does not prevent the inmate from going out. It only provides information so that a more qualified decision is taken”, Mr Camps explained.
However, according to Mr Camps, “in 95% of cases, if not more, the result of the algorithm matches the one decided by the validators”.
Two types of scales
RisCanvi is used to decide whether inmates can go on prison furlough (when they are allowed to leave the prison for a short period), but also to make choices about moving inmates from one prison to another or about parole (early release). RisCanvi analyzes five different outcomes: violent recidivism, self-inflicted harm, violence against other inmates or inside the facilities, breach of furlough, and recidivism in general.
There are two versions of RisCanvi: the screening (RisCanvi-S) and the complete (RisCanvi-C). The screening procedure consists of 10 items and is applied to all prisoners, according to the Catalonian Department of Justice. The complete procedure has 43 items and is only applied when the screening gives a high risk result or when prison personnel deem it necessary.
The complete procedure is applied to all violent offenders. “We detected that when analyzing violent crimes, the screening would always recommend applying the complete [because it gave a moderate or high risk result], so we concluded that in these cases we would directly apply the complete scale”, said Marian Martínez, a psychologist working for the prison department.
Apart from the items shown in the table above, RisCanvi has four fixed variables that are taken into account for each analysis, no matter what version is used: the inmate’s age, gender, nationality (whether Spanish or foreign) and type of detention (convicted or pending trial). Ms Martínez said that there are different cut-off points depending on the gender, nationality and age of the inmate.
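Taken together, the routing described above — screening for everyone, the complete scale for violent offenders, high screening results or staff requests, and cut-off points that vary by group — could be sketched as follows. The function names, thresholds and group adjustments are assumptions for illustration only, not the real values:

```python
# Hedged sketch of the two-stage RisCanvi protocol as described in the
# article. All thresholds and group offsets are invented placeholders.

def screening_risk(score: int, cutoff: int = 5) -> str:
    """RisCanvi-S: 10 items, collapsed here to a single illustrative score."""
    return "high" if score >= cutoff else "low"

def needs_complete_scale(violent_offense: bool, screening: str, staff_request: bool) -> bool:
    """RisCanvi-C (43 items) is applied to all violent offenders,
    to high screening results, and whenever staff request it."""
    return violent_offense or screening == "high" or staff_request

def adjusted_cutoff(base: int, gender: str, nationality: str, age: int) -> int:
    """Cut-off points differ by group; these offsets are purely illustrative."""
    offset = 0
    if age < 25:
        offset -= 1        # invented adjustment
    if nationality != "Spanish":
        offset += 1        # invented adjustment
    if gender == "female":
        offset -= 1        # invented adjustment
    return base + offset

# A violent offense routes straight to the complete scale,
# regardless of the screening result:
print(needs_complete_scale(True, "low", False))  # -> True
```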
No machine learning
The algorithm was created by a multidisciplinary group of academics and Catalonian prison workers exclusively for the administration’s use. No private company is involved. One of the main creators, psychology professor Antonio Andrés Pueyo, told AlgorithmWatch that the primary goal of RisCanvi was to evaluate inmates incarcerated for violent crimes. However, given that the population of prisoners charged with less serious crimes was larger, Mr Pueyo and his team ended up adjusting the protocol to assess all inmates. That is essentially why the screening version was created.
“The complete is very costly because it has 43 items to analyze”, he said. At the beginning, they considered a scale of around a hundred factors, similar to OASys, a system used in the United Kingdom, but settled on a simpler version (OASys now has a version with 13 risk factors).
The Catalonian prison administration considered updating RisCanvi with Artificial Intelligence but decided against it. “Five years ago we commissioned a study to see whether introducing machine learning techniques would substantially improve the results (...). The cost-benefit analysis did not show a significant improvement”, Mr Camps said.
An official report on recidivism rates from 2014 provides the sole data source on RisCanvi’s performance. It shows that the algorithm has a sensitivity of 77%, meaning that out of 100 inmates who went on to reoffend, 77 had been rated medium or high risk. The specificity was 57%, meaning that out of 100 inmates who did not reoffend, 57 had been labeled low risk.
In a study conducted after these results were published, an external researcher criticized the way the results were framed. The same data shows that over four in five predictions of reoffense are wrong: 82% of the inmates flagged as medium or high risk did not go on to reoffend, a false discovery rate of 82%. “Nevertheless, the predictive value of RisCanvi in this case does not differ in excess from other popular risk assessment instruments used worldwide”, the author of the study added.
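The two framings are consistent: when the base rate of reoffense is low, a tool can catch 77% of reoffenders and still be wrong in most of its positive predictions. A back-of-the-envelope calculation shows how — note that the ~11% base rate below is an assumption chosen to reproduce the published figures, not a number taken from the report:

```python
# How 77% sensitivity and 57% specificity can yield ~82% wrong
# reoffense predictions, given a low base rate of reoffending.
# The 11% base rate is an illustrative assumption, not from the report.

sensitivity = 0.77   # share of reoffenders flagged medium/high risk
specificity = 0.57   # share of non-reoffenders labeled low risk
base_rate = 0.11     # assumed share of inmates who reoffend

population = 10_000
reoffenders = population * base_rate          # 1,100
non_reoffenders = population - reoffenders    # 8,900

true_positives = sensitivity * reoffenders              # 847 correctly flagged
false_positives = (1 - specificity) * non_reoffenders   # 3,827 wrongly flagged

false_discovery_rate = false_positives / (false_positives + true_positives)
print(f"{false_discovery_rate:.0%}")  # -> 82%
```

In other words, the flagged group is dominated by non-reoffenders simply because non-reoffenders vastly outnumber reoffenders in the first place.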
Since the 2014 report was published, RisCanvi has been updated to take into account general recidivism among non-violent offenders and, according to Ms Martínez, to adjust the cut-off points and reduce false positives. RisCanvi is now in its third version, but there is no publicly available data to evaluate its efficacy. A new report is in the works, but no release date has been set.
Jordi Camps, from the Catalonian Department of Justice, confirmed that no external audit had been done on RisCanvi for the past eleven years, either before or after the updates.
In a somewhat convoluted argument, Mr Camps said that the programme was created to “assess and manage individual risk” and is therefore not used to study data aggregated across specific segments of the inmate population. This, in turn, means that there are no statistics on the algorithm’s performance for groups such as women or foreigners.
Carlos Castillo, a researcher in the social computing group at University Pompeu Fabra in Barcelona, told AlgorithmWatch that data aggregated at the level of a specific group could be provided to researchers on request, though any request would need to be justified and take into account the resources required to retrieve the data. He has investigated RisCanvi’s parameters over the past few years and concludes that the tool “is as good or as bad as those used internationally for the same purpose”.
“At a technological level, we have not seen anything that makes us suspect the algorithm. I do not find it to be inaccurate or biased. What I do think is that the procedure should have been different: an Algorithmic Impact Assessment (AIA) was never done (...). The draft European regulation on AI classifies these systems as ‘high risk’, and they should undergo a tougher process than RisCanvi did”, he added.
Mr Pueyo, who took part in the creation of RisCanvi, said that much of the tool’s design is based on the Canadian protocol “Level of Service Inventory-Revised”. It is not based on COMPAS, a controversial predictive algorithm used in the United States which has been shown to discriminate against Black inmates (these results are disputed). “The predictive capacity of RisCanvi, independent of biases, is similar to other systems that are being used,” he said.
He said that the professional team takes some factors into account to avoid possible bias. For example, he added, prison personnel will not assess the risk level of an Afghan inmate the same way depending on whether he is illiterate or an industrial engineer, no matter what RisCanvi outputs.
Mr Castillo, the researcher, rejects the idea that RisCanvi is similar to COMPAS because “it’s not a proprietary system, it has been created by and for the administration”.
It would be very difficult for an inmate to appeal the output of RisCanvi. Judges consider that the prison administration, and not the algorithm, makes the actual decision.
“As this [the output of the algorithm] is taken into account by the administration but is not binding, it doesn’t matter that there is no regulation governing this situation (...) and art. 22 of the GDPR [which prohibits fully automated decision-making in such situations] has a big hole for these cases”, said Alejandro Huergo, a law professor who studies algorithmic regulation in Spain.
Hero photo of the model prison of Barcelona (closed in 2017).