At the beginning of 2020, just as the whole world was grappling with increasing evidence of the discriminatory and racist outcomes of face recognition technologies, Italy mulled its widespread adoption in football stadiums as an essential tool in the fight against racism.
The issue had been at the forefront of the country’s mainstream media coverage over the previous months, most notably because of the repeated racist chants and slurs made by supporters of Verona, Lazio, and other teams against Brescia superstar Mario Balotelli, who is Black.
In February, right before the COVID-19 pandemic took Italy by storm, the Minister for Youth Policies and Sports, Vincenzo Spadafora, had had enough, and he thought technology should come to the rescue. He therefore announced that the Italian government was about to experiment with, and deploy, “new technologies to support the work of police forces” in football stadiums all over the country. And not just to prevent individuals who are banned from sports competitions from entering Italian stadiums: he wanted them to spot racist supporters in real time.
Spadafora did not specify which “new technologies” he was talking about. But others did. Luigi De Siervo, CEO of Lega Serie A, Italy’s top football league, spoke about “the silent work” that institutions were doing. “We need to get all of the people who are ruining this wonderful sport, one by one”, he said. “With face recognition, this is now possible.” According to La Repubblica, De Siervo even went to Israel to study its latest applications.
The head of the Italian Football Federation (Federazione Italiana Giuoco Calcio, or FIGC), Gabriele Gravina, also talked about the same idea: “using technology will lead to high-definition face recognition of individuals who will be guilty of committing such crimes,” he had said in January. To Gravina, the project was meant to see the light in a relatively short time span: “Experimentation will start rapidly, and then we will sensitize all interested parties, both clubs and leagues, towards using this technology.”
The police want to listen to your conversations in the football stands
But it’s not just face recognition: the plan also included adopting sound surveillance systems, so-called “sonic radars”. Already in October 2019, Gravina had hypothesized adopting the “passive radars” currently in use by anti-terrorism units to “spot the source of a noise”, including private conversations between supporters in the stands — a feature that Gravina himself described as problematic in terms of privacy. The implication was that such systems would also be able to pick up racist conversations; offenders could then be identified through face recognition.
Deploying such technologies would also carry legal consequences, according to Gravina. In fact, Federcalcio was simultaneously planning to adopt a new regime of “objective responsibility” for football clubs: only those implementing adequate technologies would be shielded from liability for acts committed by supporters. This way, according to Federcalcio, clubs would also be freed from extortion by hardcore, “ultra” fans, who often use racism as a threat to further their blackmailing strategies. For example, in September 2019, when Juventus FC stopped giving in to alleged extortion — demands for cheap season tickets, free drinks at stadium bars, and even invitations to club parties — ultras started singing racist chants as a reprisal, knowing all too well that it would mean sanctions for the club. As a result, 12 ultras were arrested.
While various media reports argued that some staffers within the Interior Ministry had serious doubts over the effectiveness of such technologies in real-life scenarios, others uncritically praised them as “the next frontier of anti-racism”. De Siervo even compared himself to former UK Prime Minister, Margaret Thatcher: “We will achieve in two years what Thatcher achieved in ten”, he argued.
But then, the pandemic struck, and everything changed.
The government vaguely confirms its plans, others remain silent
With the COVID-19 outbreak raging throughout northern Italy, the government brought Serie A to a halt on March 10. Stadiums would remain closed to the public until June, crucially delaying the planned deployments.
And yet, in an email exchange in May 2020, a spokesperson for the Ministry of Youth and Sports “surely” — but vaguely — confirmed that the Ministry is “trying to implement a series of tools to prevent racism in stadiums” and “working on that together with the FIGC.” No updates were deemed necessary in a further written exchange in September, and no further answers have been provided to the precise questions asked by AlgorithmWatch concerning the plan.
Neither Lega Calcio Serie A nor Federcalcio replied to our questions either. And yet, that hasn’t stopped clubs from experimenting with face recognition in stadiums. Thanks to media reports, we are aware of pilot deployments in both Udine and Naples, where 190 face recognition cameras were activated in September 2019. A month later, video surveillance at the Neapolitan San Paolo stadium was already crucial in identifying some 32 supporters, who were then each issued a fine of 500 euros — 166 euros if paid within 5 days of notice — for violating the stadium’s security regulations.
Yet another job for S.A.R.I.
But it’s the Udine pilot that introduces the most relevant development. As detailed in the first edition of our ‘Automating Society’ report, an “automated system for image recognition” (“Sistema Automatico di Riconoscimento Immagini”, or “S.A.R.I.”) was already in use by the Italian police to apprehend criminals, both to match the face image of a suspect with those included in databases maintained by law enforcement (“ENTERPRISE” function) and to perform live face recognition from real-time video feeds (“REAL TIME” function).
Conceived in 2014, and deployed in several different settings (from the port of Bari to a casino in Las Vegas), this system was also trialed in June 2019 at the Stadio Friuli in Udine, before the final match of the Under 21 European Cup. The objective was to pilot the monitoring function at the gates, to keep out supporters who had previously received restraining orders banning them from entry.
AlgorithmWatch repeatedly asked both Udinese FC and Reco 3.26, S.A.R.I.’s Puglia-based creators, about the results of that trial deployment, but the football club delegated all answers to the developers, and the developers never replied to any questions.
And yet, even in the absence of any publicly available evidence, De Siervo hinted at a future adoption of S.A.R.I. in all Italian football stadiums, explicitly stating that the system chosen to realize this technological revamp of football stadium security would be “the same as that in use by police forces”.
The lack of any public discussion around this topic did not prevent S.A.R.I. from being portrayed by mainstream media as a default security measure in both Inter and Milan FC’s plans for a new stadium to replace San Siro in the Lombardy capital. Here, it would not only check for banned individuals, but also for “potential terrorists”, according to Corriere della Sera Milano — its main selling point being that the software would allegedly allow operators to “look at a face at a distance of 60 meters”.
But, again, there’s more. According to the same article, the plan that both Inter and Milan football clubs have shared with the municipality’s office includes “geolocalisation and sound sensors”, together with software able to recognise abnormal behaviour, such as “loitering” or “the sudden presence of an object in a certain place”.
Former president of Italian DPA: “caution” should be applied
Racism. Terrorism. Loitering. Face recognition in stadiums takes on many guises — and even more deployments. Face recognition cameras are, for example, also installed at Atalanta’s newly named “Gewiss Stadium” in Bergamo. The restyled “Curva Pisani” features 7 turnstiles and 40 video surveillance cameras, some of them equipped with face recognition technology. In 2016, the Italian Data Protection Authority authorized the use of the technology in Rome’s Stadio Olimpico.
But things have changed since then. In recent years, evidence of inaccurate and discriminatory outcomes of face recognition technologies has increased. And that evidence should matter, argues the former president of the Italian DPA, Antonello Soro, in a written interview with AlgorithmWatch conducted when he was still in office.
In the interview, Soro acknowledged that “the absolutely peculiar context of [a football] stadium is one… which … extensive use of biometric technologies… [is]… more evident.” As a result, safeguards are all the more important. And since, in the years after 2016, the “peculiar dangers” of biometric surveillance — “especially when instrumental to an algorithmic decision-making process” — have become apparent, Soro called for “further caution” before deploying such systems. In particular, they have to be made consistent with the law and require a privacy impact assessment. This is what the DPA has been discussing in its dialogue with the FIGC too, Soro wrote.
However, the “real time” function of S.A.R.I., the one that would be relevant for Spadafora’s technological plans against racism, has only been subject to “preliminary dialogues” with the Interior Ministry during his time in office, wrote Soro. As a result, the proposal “has not been translated into a detailed project” — mainly because of the lack of an “adequate normative framework”, one that “details checks and balances” for the processing of biometric data, especially for such a significant one, “both in terms of size and typology of performed operations.”
No further updates have been provided under the current presidency of the DPA, which started in July 2020.
The COVID-19 pandemic as a way to repurpose face recognition
In the absence of clear transparency rules, pilots and deployments of face recognition technology began to multiply, hidden from public view. But then, the COVID-19 pandemic struck, halting competitions and emptying public spaces.
A foundational issue then came to the fore: when stadiums reopen, supporters will need to wear a mask to be allowed to enter the premises. But that spells doom for the accuracy of face recognition technology, as documented in a recent National Institute of Standards and Technology (NIST) study. What to do? Could face recognition be upgraded in such a way that it can still recognize people, even though they are wearing masks?
While many companies claim to have solved the issue, the available evidence suggests otherwise. This is possibly the reason why current pilots (and PR material from vendors) mostly focus on face recognition technology without making the rather ambitious claim of recognizing an individual who is wearing a mask. The solutions that do make this claim mostly work in conjunction with thermal scanning technology and other tools used to monitor public health during the pandemic.
One example of a pilot project is in Turin. The “FeelSafe” system — an “automated anti-virus gate” — was trialed at the match between Torino and Hellas Verona. The gate, produced by the Milan-based company Worldwide Exhibition System (WES), is designed to safely speed up the entrance of fans. No information is available regarding what kind of biometric data the system takes into account, not even in corporate or PR material. A WES presentation only says that the FeelSafe system can “interface with a company’s entry management system, thanks to a system of biometric controls.”
Italy is, of course, not alone in experimenting with face recognition technology in stadiums. While Denmark, Belgium, and the Netherlands have deployed such systems for security checks — with varying results and legal challenges — the technology is also being considered as a tool “to give fans personalized experiences” at Dodger Stadium in Los Angeles.
And yet, no country — apart from Italy — has tried to apply face and sound recognition systems to identify the perpetrators of racist slurs.
Whether this idea will actually be implemented remains to be seen. We’ll be watching.