Spanish inmates not to be automatically monitored for fear of AI Act

The government of the Spanish region of Catalonia approved the use of Artificial Intelligence-based software to monitor inmates and interpret their behavior. Partially funded by the European Union, the system was to be implemented at the Mas d'Enric prison near Tarragona, a city south of Barcelona, and then extended to other regional prisons. Ultimately, it wasn't.

The halt to the system’s implementation came as a surprise even to government members. In January 2024, the Catalan government’s Justice Advisor, Gemma Ubasart, announced in a plenary session that "the public administration could not be oblivious to the debate" around the new European regulation on Artificial Intelligence – the AI Act was ratified by the European Parliament in March – and that it was "more prudent not to continue with this project."

Eight months earlier, the justice department had launched a pilot project in the Mas d’Enric prison and allocated 200,000 euros to the French company Inetum, which was to supply the software. The indicative timeline in the public tender stated that the training phase would start in October 2023. However, the project never reached that stage.

The regional Department of Justice (DOJ) claims that the project was in a “very preliminary planning phase” and “had not yet been implemented.” In fact, there was still “a data protection impact analysis” pending when the administration began the contract termination process, as briefly explained in response to our freedom of information (FOI) request.  

The DOJ did not provide further details on the algorithmic models used, the databases to which they would have access, or the data that would have been used to create the risk profiles. The department justified its refusal on the grounds that "the program has not yet been developed,” which suggests the administration had only a rough idea of how it would work.

Behavior prediction and live monitoring

The technical specifications of the public contract describe a fully functional, turnkey program. Ubasart explained that “the project was conceived as a proof of concept, a feasibility study prior to any pilot test.” The system described in the tender, however, went well beyond a theoretical prototype.

The contractual documents for this “technological solution based on Artificial Intelligence for the prediction and prevention of internal security incidents” list three use cases: predicting the risk of intra-institutional violence; anticipating the possibility of inmates introducing prohibited substances and items; and monitoring inmates with a pre-existing escape risk profile. Each of them is enabled by a suite of hardware and software solutions.

First, the program “automatically” searches for “data and clues related to the inmate” within existing Catalan penitentiary information systems – among them, the Criminal Enforcement Information System and the Youth Justice Information System – and “other institutional sources,” which the contract does not specify. The software then assigns each inmate a risk profile based on an assessment of their behavior and classifies them as likely to be “aggressive” or not.
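How this classification would actually be computed is not described in the documents we obtained. Purely as an illustration of the general pattern – pull records from existing databases, reduce them to a handful of features, score them, and apply a threshold – a minimal sketch might look like the following. Every name, feature, weight, and threshold below is invented; none comes from the tender.

```python
from dataclasses import dataclass

# Hypothetical sketch only. The real system's data sources, features, model,
# and weights were never disclosed; everything here is invented for illustration.

@dataclass
class InmateRecord:
    inmate_id: str
    prior_incidents: int      # disciplinary incidents on file
    sanctions: int            # sanctions recorded in the penitentiary systems
    flagged_mentions: int     # hits from unspecified "other institutional sources"

def fetch_inmate_record(inmate_id: str) -> InmateRecord:
    """Stand-in for queries against the Criminal Enforcement and Youth Justice
    information systems; returns dummy values here."""
    return InmateRecord(inmate_id, prior_incidents=2, sanctions=1, flagged_mentions=0)

def violence_risk(record: InmateRecord) -> float:
    """Toy weighted score; the actual features and weights are unknown."""
    return (0.4 * record.prior_incidents
            + 0.3 * record.sanctions
            + 0.3 * record.flagged_mentions)

def classify(record: InmateRecord, threshold: float = 1.0) -> str:
    # Binary label mirroring the contract's "aggressive" / not aggressive split.
    return "aggressive" if violence_risk(record) >= threshold else "not aggressive"

if __name__ == "__main__":
    record = fetch_inmate_record("A-0001")
    print(record.inmate_id, round(violence_risk(record), 2), classify(record))
```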

For the second use case, the contract mentions the installation of “facial recognition and video analysis cameras.” These should allow “real-time detection of non-verbal expressions and attitudes indicative of illicit behavior.” In other words, inmates’ every movement and expression is analyzed live and treated as potentially suspicious. Each gesture feeds the algorithm’s prediction of the “risk of introduction of prohibited substances or objects.” Inmates are filmed especially after visits, including conjugal visits.

The public contract calls for an entire automated system with access to data, leads, and information about the inmates, including biometric data. The system is required to send alerts to prison officers and to generate incident reports.
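The documents likewise leave open how video-derived flags would be turned into alerts and reports. The sketch below is a hypothetical illustration of that last step only, under the assumption that the video analysis outputs labeled flags: the flag names, risk score, alert format, and threshold are all invented, and none of them appears in the contract.

```python
import json
from datetime import datetime, timezone

# Hypothetical illustration only. The contract requires alerts to officers and
# incident reports but discloses no interface; all names and thresholds are invented.

CONTRABAND_FLAGS = {"concealment_gesture", "hand_to_mouth", "object_exchange"}

def contraband_risk(video_flags: list[str]) -> float:
    """Toy score: fraction of detected flags that match the watchlist."""
    if not video_flags:
        return 0.0
    hits = sum(1 for flag in video_flags if flag in CONTRABAND_FLAGS)
    return hits / len(video_flags)

def build_alert(inmate_id: str, risk: float, threshold: float = 0.5) -> dict | None:
    """Return an alert payload for prison officers if the risk crosses the threshold."""
    if risk < threshold:
        return None
    return {
        "inmate_id": inmate_id,
        "risk": round(risk, 2),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "type": "suspected_contraband",
    }

def incident_report(alert: dict, video_flags: list[str]) -> str:
    """Serialize a minimal incident report from an alert and the raw flags."""
    return json.dumps({**alert, "flags": video_flags}, indent=2)

if __name__ == "__main__":
    flags = ["hand_to_mouth", "normal_posture", "object_exchange"]
    alert = build_alert("A-0001", contraband_risk(flags))
    if alert:
        print(incident_report(alert, flags))
```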

Lack of scientific evidence

Much of the program relies on the recognition of emotions and behaviors to predict incidents. But when asked about the scientific literature on which the developers rely to justify its effectiveness, the Generalitat remains silent. "We do not have this information," the Department of Justice responded to our FOI request.

The very validity of emotion recognition remains a matter of debate. “There is a lot of scientific controversy about this. Emotions and the way they are expressed vary greatly between cultures and contexts,” says Judith Membrives, head of digitalization at Lafede.cat. This has not stopped AI enthusiasts, education centers, and institutions alike from buying and using a technology rooted in pseudoscience.

“By claiming to infer people’s ‘true’ inner states and making decisions based on these inferences, emotion recognition technologies would cement arbitrary assumptions about people as ground truth,” states Article 19 in its Emotional Entanglement report. The global organization advocates for a ban on the development, testing, deployment, and sale of these technologies because they “violate several fundamental rights” and “fundamentally don’t work.”

The lack of solid evidence behind the technology “doesn’t stop it from being tested in less scrutinized spaces, like prison,” explains Judith Membrives. “The constant surveillance of prisoners’ behavior and faces invades what little privacy and intimacy they have left,” adds Iñaki Rivera, president of the Observatory of the Penal System and Human Rights of the University of Barcelona. He considers it the modern, technological version of Bentham’s panopticon, in which inmates were constantly watched from every possible angle by an inspector they could never see.

Inexorably, the Spanish judicial and penitentiary administration has been preparing the ground for risk assessment algorithms. The VioGén algorithm has been used nationwide since 2007 to assess the risk that victims of gender-based violence will suffer a second assault; it has been applied in at least six million cases. In Catalonia, RisCanvi has been used in prisons since 2010 to assess the risk that inmates will re-offend upon release. Both remain opaque systems, as the authorities have disclosed little about how they work.

The future AI Act, already a deterrent

Contrary to what the regional government may have expected, the cat was let out of the bag shortly after the system was introduced. Iñaki Rivera denounces the general opacity of the decision-making process: “The government launched it without mentioning it or debating it in Parliament, without even consulting human rights organizations." Rivera is particularly concerned about the storage of the information the system gathers: “Who receives it? Who controls it? Who has access to it? Is there a committee of experts overseeing all this?”

The Observatory of the Penal System and Human Rights of the University of Barcelona, together with 40 non-profits including the World Organization Against Torture, denounced the use of this software in a joint press release, calling it "a new setback for fundamental rights."

With the future European regulation on Artificial Intelligence nearing completion, the Catalan government preferred to play it safe and put an end to the project. Despite the lack of transparency, civil society organizations and individuals welcomed the cancellation. All those contacted consider it a first effect of the AI Act, even though the text has not yet entered into force.
