Report

Automating Society

ITALY

By Fabio Chiusi

In recent years, discussions in Italy about automation have increased, both at the institutional level and among the public at large. Academic institutions, mainstream media and civil society have started to debate the consequences of automation in the workplace and on the job market. According to the Italian polling company SWG, 47% of Italians think automation will destroy more jobs than it will create, while 42% think the opposite. [IT 1] At the same time, there is growing awareness of the role of algorithms, their impact on how information is consumed, and how they are used to create propaganda. At the international level, an increasing number of voices are calling for clearly articulated ethics principles to help govern Artificial Intelligence.

Italy has long suffered from a cultural and technological ‘digital divide’, and this gap affects the quality of debate on automation at all levels of society. In addition, civil society organisations are conspicuous by their absence from the debate, and there is no specific entity dedicated to evaluating the impact of automation on policy.

Automated decision-making came to the fore and gained popular attention through the “Buona Scuola” education reforms in 2015. [IT 2] These reforms included an algorithm for teachers’ mobility that, as detailed in the ADM in Action section of this chapter, wrongly assigned teachers to new work locations in at least 5,000 instances, according to estimates by the Italian Labour Union. This caused an uproar, resulting in several cases being brought before the Italian courts.

Algorithmic systems to detect tax evasion have also been a topic of discussion. However, these debates seldom depart from the ‘Big Brother’ narrative and are consistently tied to the risk of illiberal and excessive surveillance of taxpayers. This may be one of the cultural reasons why such algorithms have not yet been widely adopted.

Political debates on aspects of automation – Government and Parliament

White paper on “Artificial Intelligence at the service of citizens”

The so-called Italian Digital Transformation Team, led by former Amazon VP Diego Piacentini and made up of some 30 software developers and other technicians, was tasked with designing “the operating system of the country”. In March 2018, it produced a “white paper on Artificial Intelligence at the service of citizens”, which highlighted the potential of AI for public administration. [IT 3] Among the recommendations in the white paper was a suggestion to create a “National Competence Centre”. This would, the report stated, constitute “a point of reference for the implementation of AI in the public administration that can provide predictions of impact and measurement of the social and economic effects of AI systems, in order to enhance the positive effects and reduce risks”.

Trans-disciplinary Centre on AI

Among the other final recommendations included in the white paper was a proposal to create a “Trans-disciplinary Centre on AI”. This centre would be tasked with debating and supporting “critical reflection on emerging ethical issues” associated with AI. This is also an issue that the Italian Chamber of Deputies is currently considering in a dedicated “Commissione d’Inchiesta” (parliamentary enquiry committee).

Expert Group for the Drafting of a National Strategy on Artificial Intelligence

The Italian Ministry of Economic Development (MISE) has issued a call for an “Expert Group for the Drafting of a National Strategy on Artificial Intelligence”. The group will start operations in March 2019. Two of the group’s goals specifically relate to automated decision-making. They are: “a comprehensive review of the legal framework with specific regard to safety and responsibility related to AI-based products and services”, and an “analysis and evaluation of the socio-economic impact of development and widespread adoption of AI-based systems, along with proposals for tools to mitigate the encountered issues”. [IT 4]

Political debates on aspects of automation – Civil Society and Academia

Nexa Center for Internet and Society

The Nexa Research Center for Internet and Society at the Politecnico di Torino (Polytechnic University of Turin), is a research centre focusing on quantitative and interdisciplinary analysis of the impact the Internet has on society. [IT 5] In their project Data and Algorithm Ethics, Nexa researchers aim to “identify ethical issues raised by the collection and analysis of large amounts of data” and “focus on the problems arising from the increasing complexity and autonomy of some algorithms, especially in the case of machine learning applications; in this case the focus will be on unexpected and unintended consequences and on the responsibilities of designers and data scientists.” [IT 6]

Regulatory and self-regulatory Measures

Declaration of Internet Rights

There is no legislation concerning automated decision-making in Italy. Guidelines and regulations are also missing for areas such as autonomous driving, algorithmic transparency and the use of automation in public policy and by law enforcement agencies.

However, general principles that might apply to ADM do exist within the Italian “Declaration of Internet Rights”. This was devised by the “Commission on Internet Rights and Duties” in the Chamber of Deputies under the Renzi administration. [IT 7] The Commission was chaired by Laura Boldrini, then President of the Chamber of Deputies, and comprised members of parliament, experts and journalists. It published the Italian Internet Bill of Rights on July 28, 2015.

Of relevance to automated decision-making is Article 8. [IT 8] This explicitly states that “No act, judicial or administrative order or decision that could significantly impact the private sphere of individuals may be based solely on the automated processing of personal data undertaken in order to establish the profile or personality of the data subject”. In addition, the need to avoid any kind of digitally-induced discrimination is repeatedly stated throughout the document, in terms of assuring net neutrality (Art. 4), “access to and processing of data” (Art. 5), “exercising civil and political freedoms” (Art. 10), access to platform services (Art. 12) and “the protection of the dignity of persons” (Art. 13).

ADM in Action

Buona Scuola – algorithm for teachers’ mobility

Within the framework of the “Buona Scuola” education reforms, an automated system was designed to evaluate how to manage the mobility of teachers for the 2016/17 academic year.

The system was developed by HP Italia and Finmeccanica at a cost of €444,000. [IT 9] The algorithm was supposed to combine three inputs for each candidate: the score (“graduatoria”) attributed by law on the basis of work experience and roles, a ranked list of up to 15 preferred destinations, and the actual vacancies. The aim was to provide every teacher with the best possible outcome.

The system was supposed to prioritise by score: a teacher with a higher score would have her vacancy preferences considered before teachers with lower scores. Instead, the algorithm picked preferences without considering the scores. As more and more teachers found themselves assigned to destinations they had not stated in their preferences, many became disenchanted with the educational reforms. The resulting confusion provoked an uproar and made newspaper headlines.
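The intended priority logic can be sketched as follows. This is a hypothetical simplification, not the actual code: all names and data structures are illustrative, and the real system handled far more constraints than this.

```python
# Hypothetical sketch of the intended matching rule: teachers with
# higher scores should have their preferences honoured before
# lower-scoring teachers. Names and data structures are invented.

def assign_teachers(teachers, vacancies):
    """teachers: list of dicts with 'name', 'score' and 'preferences'
    (an ordered list of locations); vacancies: dict mapping location
    to the number of open posts. Returns a name -> location mapping."""
    assignments = {}
    # Process candidates in descending score order, as the rules required.
    for t in sorted(teachers, key=lambda t: t["score"], reverse=True):
        for loc in t["preferences"]:
            if vacancies.get(loc, 0) > 0:
                vacancies[loc] -= 1
                assignments[t["name"]] = loc
                break
    return assignments

teachers = [
    {"name": "A", "score": 90, "preferences": ["Rome", "Milan"]},
    {"name": "B", "score": 70, "preferences": ["Rome", "Turin"]},
]
print(assign_teachers(teachers, {"Rome": 1, "Turin": 1}))
# → {'A': 'Rome', 'B': 'Turin'}
```

The reported bug amounted to skipping the sort step above, so a lower-scoring teacher could take a contested post before a higher-scoring one.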

Following the turmoil, a technical analysis (“Perizia tecnica preliminare”) of the algorithm was performed by a team of engineers from the La Sapienza and Tor Vergata universities in Rome on June 4, 2017. [IT 10] The report that followed detailed exactly why the mistakes happened—and possibly why they had to happen.

The authors stated that the algorithm was not up to the task of combining teachers and destinations in an automated way, because “even the most basic criteria of programming that notoriously apply” to the case at hand “were disregarded”. In fact, the code was so badly written that the engineers were left wondering how the programmers could have designed such a “pompous, redundant, and unmanageable system”. The code was riddled with bugs capable of causing exactly the kinds of mistakes that occurred.

The algorithm in question consisted of two parts, one of which was developed in COBOL, a programming language conceived in 1959 and mostly used in the business sector rather than in government. The authors of the technical analysis could not explain this choice, unless one supposes a deliberate intent to shield the system from scrutiny through incomprehensible variable names and comments scattered throughout the unusually long and chaotic code. The affair resulted in several cases coming before judicial courts throughout Italy, costing yet more of the state’s time and taxpayers’ money.

The “Abbiamo i numeri giusti” project

Designed by the Università Cattolica del Sacro Cuore in Milan, with contributions from Merck, the “Abbiamo i numeri giusti” (“We have the right numbers”) project is an automated system that helps health institutions pick the most efficient and effective treatments for patients while, at the same time, optimising public spending on data- and evidence-based grounds. [IT 11]

The system, which aims to maximise patient engagement and medical compliance, will be tested and validated at a nationwide level, starting with trials in five regions (Emilia-Romagna, Veneto, Toscana, Lazio and Puglia) and using their existing databases. A high-level ‘Advisory Board’, made up of institutional representatives at both national and regional levels, members of scientific societies and patient associations, has been created to produce a report on the results of the experiment and to formulate ideas on how best to proceed. According to Merck, over 200,000 people die every year in Europe because they do not follow their treatment correctly. [IT 12]

RiskER – predict the risk of hospitalization

“RiskER” is a statistical procedure that combines over 500 demographic and health variables in order to ‘predict’ the risk of hospitalisation. The automated system was developed by the Agenzia Sanitaria e Sociale (Health and Social Agency) of the Emilia-Romagna Region together with Thomas Jefferson University in Philadelphia. [IT 13] It has been used on an experimental basis in 25 “Case della Salute” (public offices where administrative, social and healthcare services are provided at a local level), involving some 16,000 citizens and 221 general practitioners. [IT 14] The aim is to change the hospitalisation process of patients who, according to the algorithm, show higher health risks. [IT 15] It is hoped that the system will eventually be used in all 81 “Case della Salute” in the region.

The validation process showed that the algorithm, which was revised in 2016, was good at predicting health risks for people aged 14 and over, and especially for older people, where the risks are higher. During the experiment, the algorithm grouped the population into four risk categories, allowing doctors to identify high-risk patients and to contact, advise and/or treat them before their conditions became critical. Socio-economic variables might also be added to the algorithm in the future to increase its predictive power. [IT 16] The “RiskER” algorithm is part of the EU’s “Sunfrail” program, which is aimed at helping the elderly.
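The stratification step described above can be sketched as follows, assuming a hypothetical predicted probability of hospitalisation per patient. The thresholds and category names are invented for illustration; the real model combines over 500 variables and its cut-offs are not public.

```python
# Illustrative sketch only: a hypothetical predicted probability of
# hospitalisation is bucketed into four risk categories, mirroring
# the stratification described in the text. Thresholds are invented.

def risk_category(p):
    """Map a predicted hospitalisation probability to one of four bands."""
    if p >= 0.5:
        return "very high"
    if p >= 0.2:
        return "high"
    if p >= 0.05:
        return "moderate"
    return "low"

patients = {"P1": 0.62, "P2": 0.25, "P3": 0.08, "P4": 0.01}
groups = {pid: risk_category(p) for pid, p in patients.items()}
print(groups)
# → {'P1': 'very high', 'P2': 'high', 'P3': 'moderate', 'P4': 'low'}
```

In the trial, doctors would then proactively contact the highest-risk groups before their conditions became critical.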

S.A.R.I. – facial recognition system

The “S.A.R.I.” (Sistema Automatico di Riconoscimento Immagini), or Automated Image Recognition System, is an automated decision-making tool used by the “Polizia di Stato”, the national police force. [IT 17] This facial recognition technology has two functions: 1) ENTERPRISE, 2) REAL-TIME.

Through the use of “one or more” facial recognition algorithms, the ENTERPRISE function can match the face of an unidentified suspect in an image against facial images stored in databases administered by the police (which number “in the order of millions”, according to the “Capitolato Tecnico” for S.A.R.I., the technical document in which the Interior Ministry detailed the system’s desired requirements [IT 18]). The function also produces a ranking of similar faces, each with an associated algorithmic score. The ranking, whose criteria are not transparent to the public, can be further refined by demographic and descriptive variables such as race, gender and height.

The REAL-TIME function is aimed at monitoring video feeds in real-time in a designated geographical area, and again it matches the recorded subjects to an unspecified ‘watch list’ that contains individuals numbering in the hundreds of thousands. Whenever a match is found, the system issues an alert for operators.

The first solution was developed by Parsec 3.26 SRL for €827,000 [IT 19]; the second was developed by BT Italia S.p.a./Nergal Consulting S.r.l. for €534,000. [IT 20]

The S.A.R.I. system made headlines in September 2018, when it was revealed that two burglars in Brescia had, for the first time, been identified and subsequently apprehended as a result of the algorithmic matching obtained by the system. This prompted further analyses by law enforcement agents, who later confirmed that the suspects were, in fact, the two men suggested by the automated system.

Questions arose as to how many individuals are included in the databases that feed the algorithmic matches (the police itself hinted at 16 million people, with no further specification as to their composition) and why. [IT 21] There were also questions about the reliability of the matching (in terms of false positives and false negatives), the safeguards for matched individuals, and the system’s cybersecurity and privacy protections.

ISA – Fiscal Reliability Index

Algorithms have slowly made their way into fiscal legislation in Italy. Many instruments have been devised over the last decade, with the aim of automatically checking for tax evasion.

Among them are the search for discrepancies between someone’s stated income and their actual spending patterns (“Redditometro”), and between declared invoices and standard of living (“Spesometro”). [IT 22] In addition, the forthcoming “Risparmiometro” will try to automatically flag anomalies between stated income, savings and standard of living by matching information from several databases.

As none of these instruments made a real difference in combatting tax evasion, a new fiscal reliability index (“ISA”, Indice Sintetico di Affidabilità) was created and became law in 2016/2017. The ISA is an arithmetic average of a set of elementary indices of fiscal reliability, including coherence with previous fiscal declarations, consistent compliance, and an analysis of past and current anomalies. These are summarised as a 0-10 score that embodies the fiscal ‘virtue’ of the analysed subject over a timeframe initially set at the past 8 years. [IT 23]
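As described above, the ISA boils down to simple arithmetic: a plain average of elementary reliability indices on a 0-10 scale. A minimal sketch, with invented index names and values (the actual elementary indices and how they are computed are not reproduced here):

```python
# Minimal sketch of the ISA scoring arithmetic described in the text:
# an arithmetic average of elementary fiscal-reliability indices, each
# assumed to lie on a 0-10 scale. Index names are illustrative only.

def isa_score(indices):
    """indices: dict of elementary index name -> value on a 0-10 scale.
    Returns the arithmetic average, i.e. the overall 0-10 ISA score."""
    return sum(indices.values()) / len(indices)

taxpayer = {
    "coherence_with_past_declarations": 8.0,
    "consistent_compliance": 9.0,
    "anomaly_analysis": 7.0,
}
print(isa_score(taxpayer))  # → 8.0
```

Under the reward logic described below, a taxpayer whose average stays high over the observed years would qualify for exemptions from certain inspections.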

The idea behind the ISA is to revolutionise the logic of fiscal compliance, shifting from a ‘punitive’ approach to one that is based on ‘rewards’ for those who have consistently shown fiscal reliability, according to the index. As a result, taxpayers with higher scores will enjoy a reward system that includes an exclusion from certain fiscal inspections, and simpler conditions for compliance.

KeyCrime – predictive policing algorithm

KeyCrime is predictive policing software based on an algorithm of criminal behaviour analysis. [IT 24] It was developed over the last 15 years by former investigative police officer Mario Venturi and designed to automatically analyse criminal behaviour, help identify trends and suggest ways of thwarting future crimes.

Reportedly, the system can now sift through some 11,000 variables for each crime, ranging from the obvious (time, location and appearance) to the less obvious (reconstructions given by witnesses and suspects during subsequent interviews, and even the criminal’s modus operandi). [IT 25]

Video feeds are included in the analysed data. The idea behind the software is to rationalise the use of the police force and automatically deploy officers exactly where they are needed. However, this is not done by predicting the areas in which certain crimes are most likely to be committed, but by giving clear indications as to where sought-after suspects might be about to strike, making it easier to apprehend them. [IT 26]

In addition, the analytic capabilities of the KeyCrime software are used to attribute a series of crimes to the same suspect. Data gained in actual usage shows that when the number of crimes committed by the same suspect exceeds four, the algorithm is 9% more effective than traditional methods.

Over the years, KeyCrime has been successfully deployed in the city of Milan. According to police data divulged to the press in 2015, this led to a 23% reduction in pharmacy robberies, and a suspect identification rate of 70% in bank robberies. [IT 27] [IT 28]

eSecurity – predictive urban security in Trento

The “eSecurity” project labels itself as “the first experimental laboratory of predictive urban security”. [IT 29] It was co-funded by the European Commission and coordinated by the eCrime research group of the Faculty of Law at the University of Trento, in partnership with the ICT Centre of Fondazione Bruno Kessler, the Trento Police Department and the Municipality of Trento. [IT 30]

The project builds on the ‘hot spots’ philosophy. This is the idea that “in any urban environment, crime and deviance concentrate in some areas (streets, squares, etc.) and that past victimization predicts future victimization”. It is an “information system for police forces and local administrations”, modelled on the predictive policing systems adopted in the UK and the U.S., and aimed at providing complex automated assistance to law enforcement agencies and decision-makers.

The expanded “predictive urban security” model adopted by eSecurity employs algorithms that: 1) use “past geo-referenced crimes” and smart city data, 2) consider “the concentration of victimisation, insecurity and urban disorder at city level”, 3) try “to not only predict ‘where’ and ‘when’ specific types of criminality and deviance will occur but also to understand ‘why’ they will occur”, and 4) are explicitly aimed at providing data-based insights to policy-makers for ‘smart’ and effective urban security planning.

 

Fabio Chiusi

Fabio Chiusi is tech-policy advisor at the Italian Chamber of Deputies and Adjunct Professor in “Journalism and New Media” at the University of San Marino. As a reporter and columnist, he wrote about digital politics and culture for La Repubblica, L’Espresso, La Lettura del Corriere della Sera, Wired and Valigia Blu. He coordinated the PuntoZero research project on transparency in social media campaigning and wrote reports about the cybersecurity of IoT and the use of personal data during the 2018 Italian election. He is a Fellow at the Nexa Center for Internet & Society of the Politecnico of Turin, and author of essays on digital democracy and mass surveillance. His latest book is “Io non sono qui. Visioni e inquietudini da un futuro presente” (November 2018, DeA Planeta).
Published: January 29, 2019
