By Fabio Chiusi
The COVID-19 pandemic has spurred the deployment of a plethora of automated decision-making (ADM) systems all over Europe. Local administrations and national governments alike have placed high hopes in applications and devices aimed at containing the outbreak of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) through automation, thus providing a much-needed alternative to lockdown measures that limit personal freedoms and strain economies.
Smartphone apps have been launched to help speed up and complement the manual contact tracing efforts put in place by health authorities, causing a heated international debate around how to best balance privacy and human rights with the urgent need to monitor and curb the spread of the disease. QR codes have been issued to enforce quarantine orders and log check-ins in shops and public places. Thermal scanners, at times powered by facial recognition technologies, are rapidly becoming the new normal to access venues as diverse as supermarkets, stadiums and museums. Artificial intelligence more generally — and vaguely — has been enrolled in the analysis of large masses of aggregate, anonymised population data, in order to get real-time insights on crowd behaviour, predict risk areas and model public policy interventions.
Several academic institutions and civil society organisations are keeping track of these developments, from the Ada Lovelace Institute’s ‘Digital Contact Tracing Tracker’ to MIT’s ‘COVID Tracing Tracker’ and Privacy International’s ‘Tracking the Global Response to COVID-19’. None, however, specifically concentrates on aspects related to automated decision-making within Europe. As these developments fall within the scope of AlgorithmWatch and Bertelsmann Stiftung’s ‘Automating Society’ project, whose 2020 edition is due out in October, we felt we could not ignore them.
This is why we decided to publish this “preview report”, fully dedicated to an initial mapping and exploration of ADM systems deployed throughout Europe as a consequence of the COVID-19 outbreak. Especially given the uncertainties around the resurgence of the virus that are present at the time of writing, we felt it was necessary and urgent to provide a first snapshot of the socio-technical systems deployed against the virus in the 16 European countries investigated in the ‘Automating Society’ project.
It is, by all means, incomplete: too much is happening in the field on a daily basis, globally, to even try and claim exhaustiveness, and many such systems are still opaque and/or on trial. But it will provide a contextualisation of why so many ADM systems are being adopted, some explanation of their actual workings, and thoughts around their consequences in terms of human rights, democracy and public health.
Some comparison will also be drawn between the main features of ADM-based responses within the EU and outside of it, highlighting some significant differences in how the interplay of technology and rights is conceived in different parts of the world. At the same time, the report will show how and why “technological solutionism”, a flawed ideology that conceives of every social problem as a “bug” in need of a “fix” through technology, is common to many of these otherwise diverse endeavours — even in the face of scant evidence in favour of the effectiveness of existing anti-COVID ADM systems.
A more detailed country-by-country analysis is then presented in the second part of this report, thanks to the efforts of the outstanding network of researchers that has been working on the Automating Society project over the last year, thus providing unique on-the-ground insights from each country.
What are we talking about when we talk about ADM in COVID-19 responses?
Digital technologies have been touted as a solution to the COVID-19 outbreak since early in the pandemic. But claims that “AI” — a vague and much-hyped term to which AlgorithmWatch has long preferred the more rigorous locution “ADM” — could reverse the course of the disease were quickly proven over-enthusiastic in the face of available evidence, and much of the public debate came to revolve around how to complement the manual contact tracing efforts put in place by health authorities with automation.
The idea is simple: total or even partial lockdowns of the kind witnessed all around the world during the first wave of the pandemic severely affect both individual rights and the economy, and therefore should be avoided, if possible, in the future. The most efficient and secure way to do so would therefore be, according to many in both government and academia, to deploy “smart” solutions to quickly spot virus carriers, both symptomatic and asymptomatic, and then more easily and reliably reconstruct their contacts over the last two weeks. This way, whoever has been exposed to the risk of contracting the COVID-19 disease is promptly warned through a notification on her/his smartphone, allowing potentially infected individuals to then immediately look for medical assistance (e.g., by getting tested and, if necessary, quarantined).
Is this plan realistic? And how can it be translated into action? Answers varied greatly, even within Europe. Sweden, for example, has so far not adopted a digital contact tracing app, and isn’t planning to do so. Scotland also seriously considered not having one, even arguing that it might never be needed. At the opposite end of the spectrum, Slovenia has developed a legislative framework for the creation of an app that is mandatory both for citizens who test positive and for those who want to travel, even inside the country.
The acceptable level of intrusion into the lives of European citizens by such apps has also been a divisive issue, leading to very different technological solutions. Some, more oriented towards preserving individual privacy, resorted to Bluetooth Low Energy technology, either within a “centralised” or a “decentralised” architecture — a distinction that, although merely technical on the face of it, has come to embody the gist of the public debate around the balancing of human rights and disease surveillance. Other applications have focused on making good use of epidemiological data gathered by health authorities — in particular on the early spotting of risky locations or clusters — and therefore adopted GPS technology.
The former batch of applications is therefore based on proximity data, while the latter is based on location data, with very different implications not only for rights, but also uptake, actual functioning and efficacy.
In order to better understand this and other crucial distinctions, we need to first take a closer look at what ADM systems are, how they are institutionally and geopolitically framed in the context of the COVID-19 disease, and more specifically which processes and decisions are entailed by different types of ADM systems, and with what degree of human involvement.
Geopolitics of ADM systems in the pandemic
Why “ADM”, and not “AI”
In the first edition of the ‘Automating Society’ report, we defined an “automated decision-making system” as “a socio-technological framework that encompasses a decision-making model, an algorithm that translates this model into computable code, the data this code uses as an input—either to ‘learn’ from it or to analyse it by applying the model—and the entire political and economic environment surrounding its use”.
Contrary to “AI”, then, ADM systems are not mere technologies. Rather, they are ways in which a certain technology — which may be far less sophisticated or “intelligent” than deep learning algorithms — is inserted within a decision-making process.
In the context of COVID-19, for example, the same technology can be used for very different purposes, depending on the rationale behind it. Data collected through a Bluetooth Low Energy-based smartphone app can, for example, be voluntarily and anonymously shared either with a central server or with the smartphones of potentially infected individuals, with no consequences or sanctions whatsoever in case a citizen decides not to download it. Or, the same technology can be adopted within a much more rights-invasive solution, working in tandem with GPS to continuously provide a citizen’s location to the authorities, at times within mandatory schemes — and with harsh sanctions in case they are not respected.
Different governance models are therefore reflected in different ADM systems.
Mandatory and rights invasive ADM systems: the China model
Authoritarian countries made full use of the digital surveillance infrastructure they already had in place, and even added further equipment and devices, to deliver ADM solutions that strongly prioritise public health and safety concerns over individual rights. China, for example, employed a colour-based rating system, the Alipay Health Code, using big data “to draw automated conclusions about whether someone is a contagion risk”, wrote the New York Times. Under this model of ADM, citizens have to fill in a form with their personal details, and are then presented with a QR code in one of three colours: “A green code enables its holder to move about unrestricted. Someone with a yellow code may be asked to stay home for seven days. Red means a two-week quarantine”. A scan is necessary to visit “office buildings, shopping malls, residential compounds and metro systems”, according to a Reuters report.
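Stripped of its opaque inputs, the colour-code scheme amounts to a simple automated decision rule. The following sketch is purely illustrative — the actual scoring criteria of the Alipay Health Code are not public, so the function name, the risk score and its thresholds are all hypothetical assumptions; only the three colour-to-restriction mappings come from the press reports quoted above:

```python
# Hypothetical reconstruction of a colour-code decision rule as described in
# press reports. The real system's inputs and thresholds are opaque; the
# risk_score and the 0.3/0.7 cut-offs below are invented for illustration.

def health_code_colour(risk_score: float) -> tuple[str, str]:
    """Map an opaque risk score to a colour and the restriction it entails."""
    if risk_score < 0.3:
        return "green", "unrestricted movement"
    if risk_score < 0.7:
        return "yellow", "asked to stay home for seven days"
    return "red", "two-week quarantine"
```

The point of the sketch is how little the affected citizen sees: the inputs, the scoring and the thresholds are all hidden behind a single colour, which then directly determines that person’s freedom of movement.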
Even without considering that 18 of the 20 most video-surveilled cities in the world are in China (Comparitech), that this Moloch — comprising 54% of all the world’s CCTV cameras — is being repurposed with facial recognition technology “to scan crowds for fever and identify individuals not wearing masks”, and that “non-contact thermal augmented reality” smart glasses supplied by AI start-up Rokid Corp are being added to the surveillance apparatus to “enforce social distancing”, it is easy to see how radical and extreme this view of ADM is.
The Alipay rating system is not only mandatory, but it also autonomously, and opaquely, decides the health status — and consequent rights — of individuals in a country in which the population is getting accustomed to having algorithmic credit scoring systems judge all aspects of their lives, both private and public.
An increasing number of countries are walking in China’s footsteps. Facial recognition, for example, has been adopted in Russia against COVID-19 since the beginning of the outbreak in the country. Late in March, the BBC reported that “Moscow police claimed to have caught and fined 200 people who violated quarantine and self-isolation using facial recognition and a 170,000-camera system”, adding that “according to a Russian media report some of the alleged violators who were fined had been outside for less than half a minute before they were picked up by a camera.”
As if that were not enough, Moscow authorities also mandated the download of a geolocation tracking app and registration of a government-issued QR code, similar to the one in use in China: starting in April, it was reportedly necessary “for each and every trip to the pharmacy, grocer, or even just to walk (a) dog”, wrote Gizmodo. Going about without a QR code could mean jail time.
In the meantime, the app would also send push notifications to instruct quarantined and self-isolated citizens to take and send a selfie “as a proof of not having left the house without the phone”, wrote Human Rights Watch. “If users miss a notification, they are automatically fined 4,000 rubles” — even when, according to “hundreds, if not thousands” of them, the fine is wrongly issued because of bugs and glitches in the software.
Mandatory and rights-invasive ADM systems largely concern countries outside of Europe. Around the same time, on April 11, the South Korean government announced plans “to strap tracking wristbands on people who defy quarantine orders”, and “location histories” of individuals who tested positive were, and have so far been, regularly collected by the health authorities — and even published online, with serious consequences in terms of shaming of affected individuals.
When nightclubs in Seoul became a potential hotspot for a new wave of COVID infections, the Guardian wrote that “lurid reporting, along with South Korea’s use of the trace and test method, led to members of the gay community reporting feeling scared to get tested and even suicidal”.
Also, the app to enforce quarantines was found to have “serious security flaws that made private information vulnerable to hackers”, according to New York Times reporting confirmed by the government. Crucially, “the flaws could also have allowed hackers to tamper with data to make it look like users of the app were either violating quarantine orders or still in quarantine despite actually being somewhere else.”
India also made the download of its Bluetooth- and GPS-based contact tracing app, “Aarogya Setu”, compulsory for all workers in both the public and private sector on May 1st. Similar issues arose: a hacker claimed to be able to tamper with the app so as to always appear “safe”, and several privacy hiccups occurred — all within a broader biometric surveillance infrastructure in which, just like in China, facial recognition is rapidly, and opaquely, becoming the new normal.
The more intrusive the ADM system, the more severe the consequences. In Israel, contact tracing has been performed both through a location-based tracking app, Magen, and a digital contact tracing programme run by the country’s intelligence agency, Shin Bet. Both proved to have serious flaws: Magen showed inaccurate location records, while the intelligence programme was renewed in July even after being widely criticised for forcing people into quarantine by mistake, with complaints reportedly going unanswered on a regular basis.
A 1-10 rating system based on mobile data analysis (“Coronameter”) and even voice surveillance are being explored at the time of writing, showing a clear trend for countries that made digital mass surveillance an integral part of their public health response to the COVID-19 disease: no matter how invasive, it is never enough.
Echoes in Europe: geolocated selfies and bracelets
Even though this radical and ultimately repressive model of ADM deployment mainly concerns Asia and the Middle East, similarities can be found in some European countries as well. Strong analogies with the Russian selfie-based quarantine app can be found in Poland’s “Kwarantanna domowa” app, which also uses geolocation and facial recognition technology to ensure that the people concerned stay in quarantine; and again, just as in Russia, downloading the app is mandatory.
In May, the same system was adopted in Hungary, too.
In an utterly dystopian turn, Bahrain even tied its app to a national television show, called ‘Are you home?’, which according to Amnesty “offered prizes to individuals who stayed at home during Ramadan. Using contact details gathered through the app, 10 phone numbers were randomly selected every day using a computer programme, and those numbers were called live on air to check if the app users were at home”.
Lithuania’s application, meanwhile, was suspended by the country’s data protection authority for failing to comply with the EU’s privacy regulation, the GDPR.
Liechtenstein took a different path instead, giving priority to a pilot study that aims to investigate whether wearable devices can help with the early detection of COVID-19. For this reason, it has launched a study, called ‘COVI-GAPP’, in which 2,200 citizens are given a biometric bracelet to collect “vital bodily metrics including skin temperature, breathing rate and heart rate”, and then have those data sent back to a Swiss laboratory for analysis. The idea behind an experiment that will ultimately involve all of the citizens in the country is that by analysing physiological vital signs “a new algorithm for the sensory armband may be developed that can recognize COVID-19 at an early stage, even if no typical symptoms of the disease are present” (AFP).
Anti-COVID bracelets are, again, more common outside of Europe, namely in countries such as Hong Kong, Singapore, Saudi Arabia, the UAE, and Jordan — where, however, they are mainly deployed to enforce quarantine orders and other COVID-19 restrictions.
These and other uses are a cause of concern for digital rights activists. The Electronic Frontier Foundation, for example, wrote that wearables, in the context of the pandemic, “remain an unproven technology that might do little to contain the virus, and should at most be a supplement to primary public health measures like widespread testing and manual contact tracing”. Also, and importantly, “everyone should have the right not to wear a tracking token, and to take it off whenever they wish”.
This does not appear to be the case at Michigan’s Albion College, where students are required to download an app that “tracks their location and ongoing health data” in an attempt to turn the campus into a safe “COVID bubble”. In an email obtained by Newsweek, “the college told students (…) that they must have their location services on at all times”.
And not just that: the app, called “Aura”, also notifies “the school's administration if a student leaves the campus's bubble”. And if a student is found beyond the 4.5-mile perimeter, the sanction is suspension.
Yet another echo of the Chinese model of ADM, heard in a democratic context.
WHO guidelines paint a different, and better, picture for ADM
But is this invasive model of deployment of ADM systems against COVID-19 supported by World Health Organisation principles? Not really.
For one, in its guidelines published on May 10, ‘Contact tracing in the context of COVID-19’, the WHO clearly states that adoption of “proximity tracing” systems should not be mandatory, exactly because “such uses of data may also threaten fundamental human rights and liberties during and after the COVID-19 pandemic”.
The concern here is the normalisation of mass surveillance, under the guise of an urgent and (allegedly) effective solution to the pandemic. “Surveillance can quickly traverse the blurred line between disease surveillance and population surveillance”, writes the WHO. “Thus, there is a need for laws, policies and oversight mechanisms to place strict limits on the use of digital proximity tracking technologies and on any research that uses the data generated by such technologies.”
Not only should downloading any such apps be voluntary: users should also be free to delete them at any time, to avoid the risk of exacerbating existing inequalities. This is not the case in Hungary, for example, where the home quarantine app is only voluntary until one decides to download it, at which point its use becomes mandatory. Failing to comply with geolocated facial image and SMS authentication immediately amounts to an infringement that can be sanctioned by a fine.
This is also problematic in terms of WHO principles, as no benefits should be attached to the decision to download such an app, and no discrimination should follow from the decision not to. Again, this is not the case with the Hungarian quarantine app: “If someone does not voluntarily agree to install the software, the police will go out more often to personally check that the house quarantine is being complied with”, wrote Index.
The WHO then crucially warns against rushed deployments of solutions whose efficacy is still unproven. And yet, while “it is essential to measure the effectiveness and impact of these technologies”, we don’t really know how to do so: “Currently, there are no established methods for assessing the effectiveness of digital proximity tracking”, read the guidelines.
What can and should be done, argues the WHO document, is to provide “transparency and explainability” for the adopted ADM systems. This includes “meaningful information about the existence of automated decision-making and how risk predictions are made”, namely “the types of data collected, how data will be stored and shared, and how long data shall be retained”.
The code of ADM systems should be open-sourced, and “algorithmic models used to process data and assess risk of transmission must be reliable, verified, and validated”. An independent oversight body should also be established to monitor respect for ethics and human rights, according to the guidelines.
Importantly, download should be based on consent; in particular, the notification that a user has tested positive should not be automatically transmitted by the app, but needs confirmation from a health professional.
In any case, the WHO warns against technological solutionism, arguing that digital proximity tracing is “not essential” and only meant to complement, not replace, manual contact tracing efforts: “This technology cannot capture all the situations in which a user may acquire COVID-19, and it cannot replace traditional person-to-person public health contact tracing, testing or outreach which is usually done over the phone or face to face. Digital proximity tracking applications can only be effective in terms of providing data to help with the COVID-19 response when they are fully integrated into an existing public health system and national pandemic response.”
This is consistent with AlgorithmWatch’s policy position on digital contact tracing apps.
The EU alternative: public health, digital technologies and human rights are not incompatible
The approach that most closely resembles the WHO guidelines arguably comes from the European Union. In a number of documents, EU institutions have tried to combine the (alleged) benefits of automated decision-making systems with respect for privacy, human rights, and democratic checks and balances, thus providing a much-needed alternative to the China model, and setting out criteria and principles that any technological response to the coronavirus outbreak should respect, if it is to comply with European laws and values.
As the European Data Protection Board (EDPB) has clarified since March, the processing of personal data to face the public health emergency caused by COVID-19 is not incompatible with the GDPR. However, as noted by EDPB Chair Andrea Jelinek, “even in these exceptional times, the data controller and processor must ensure the protection of the personal data of the data subjects”.
On April 8, 2020, the European Commission issued a Recommendation “on a common Union toolbox for the use of technology and data to combat and exit from the COVID-19 crisis, in particular concerning mobile applications and the use of anonymised mobility data”.
The document sets the stage for an EU-wide strategy on how to use data and technology in tackling the coronavirus outbreak, and hinges on two premises. First, that the pandemic is an issue that can’t be properly tackled at the national level: “a fragmented and uncoordinated approach risks hampering the effectiveness of measures aimed at combating the COVID-19 crisis”, it reads, “whilst also causing serious harm to the single market and to fundamental rights and freedoms”. Imagine an EU citizen travelling across borders and having to download different apps that are unable to communicate with each other.
The drawbacks of a fragmented approach are already apparent in the USA, where different and even competing views of how to deploy ADM systems, absent overarching strategies and principles, led to a situation in which some states opted for decentralised Bluetooth-based “exposure notification” (South Dakota, Alabama, Virginia and North Carolina), others deployed their own solutions (e.g. Utah, using “a combination of GPS, WiFi, IP address, cellular location data and Bluetooth to identify contacts”), and many did not consider any ADM systems at all. The result is a patchwork of non-interoperable solutions and contradictory health policies; users’ legitimate concerns over fundamental rights and efficacy have mostly led to confusion and low uptake rates.
In Minnesota, for example, officials “have been using what they describe, without going into much detail, as contact-tracing in order to build out a picture of protestor affiliations”, wrote BGR, while a location-based app initially developed in North Dakota was found to be "sharing location data with Foursquare and an advertising ID with Google”, according to Fast Company.
As a result, Harvard professor Jonathan Zittrain denounced “a plateau in visible activity on the tech side of the ledger”, even wondering whether digital contact tracing in the US is “over before it began”.
This is exactly what EU principles — and interoperability specifications — were meant to avoid.
The second premise in the EU Commission document is acknowledging that digital technologies and data “have a valuable role to play in combating the COVID-19 crisis” — but only provided they meet certain conditions. This translates into a call for a “pan-European approach for the use of mobile applications, coordinated at Union level”, while at the same time respecting privacy and fundamental rights.
In order to do that, reads the document, “preference” should be given “for the least intrusive yet effective measures, including the use of proximity data and the avoidance of processing data on location or movements of individuals, and the use of anonymised and aggregated data where possible”.
The Recommendation also calls for “a common scheme for using anonymized and aggregated data on mobility of populations”, to better predict the evolution of the pandemic, evaluate the effectiveness of Member States’ responses and inform coordinated strategies — but again, “safeguards” must be “put in place to prevent de-anonymisation and avoid reidentifications of individuals”.
Criteria for a democratic use of digital contact tracing apps detailed in subsequent EU documents are consistent with the rationale clearly expressed by European Data Protection Supervisor, Wojciech Wiewiórowski: “Humanity does not need to commit to a trade-off between privacy and data protection from one side, and public health, on the other. Democracies in the age of Covid-19 must and can have them both.”
When a first iteration of the “common EU toolbox” was presented by the eHealth Network — a voluntary network created under article 14 of Directive 2011/24/EU — on April 16, digital contact tracing was in fact envisioned as “fully compliant with the EU data protection and privacy rules”, voluntarily adopted and “dismantled as soon as no longer needed”, based on proximity data (Bluetooth) rather than location data (GPS), cybersecure and interoperable.
Crucially, the eHealth Network reiterates — just like the WHO did — that digital contact tracing is a complement, rather than a substitute, for manual contact tracing, and calls for “monitoring the effectiveness of the apps”: “Member States should develop a set of KPIs (Key Performance Indicators, ed.) to assess/reflect the effectiveness of the apps in supporting contact tracing”.
While a heated debate among proponents of a centralised (ROBERT) versus decentralised (DP-3T) architecture for such apps followed among researchers, academics and lawmakers, with the PEPP-PT consortium taking and rapidly losing center stage at EU level, important provisions on automated decision-making aspects of data-based responses to the pandemic have largely gone unnoticed.
The eHealth Network’s toolbox, for example, notes that fully automated processing of decisions concerning the warnings issued through apps should be prohibited, consistent with art. 22 of the GDPR. Transparency is also key, as the document requires the code of such apps to be open source, “public and available for review”.
More generally, all interventions by EU institutions on ADM-related aspects of the COVID-19 crisis revolve around the idea of building trust between citizens and health authorities. This is only possible when privacy and fundamental rights are fully respected within Europe, they clearly state, while at the same time — again, similarly to what the WHO prescribes — preventing any discriminatory outcomes for those who freely decide to not adopt contact tracing apps: no “negative consequences” should follow from such decisions.
These are the pillars of a model of ADM systems deployment that is fundamentally different from, and radically opposed to, the one adopted in China and, to a lesser degree, other countries in Asia and the Middle East.
ADM systems to complement contact tracing efforts
Nowhere has this fundamental clash between different models of ADM been more apparent than in the global debate around digital apps to complement contact tracing efforts. Born in the midst of the first, tragic wave of COVID-19 infections, the debate was initially informed by a (mostly tech-solutionist) sense of urgency and necessity that seemed to justify unprecedented intrusions of government surveillance into the lives of millions of citizens living in democratic countries.
Tech enthusiasts the world over argued that privacy and other fundamental rights had to be somehow sacrificed — or at least could be sacrificed — to enable public health, and especially to avoid further total lockdowns. Some, for example in Italy, even proposed doing away with privacy altogether, as respecting it would, in this view, have placed an unnecessary burden on the ADM system.
As previously said, a heated global discussion ensued around which technology would best help speed up contact tracing efforts, with two main camps: one in favour of GPS tracking (Norway, Iceland and Bulgaria), and therefore location data (at times, as in the Czech Republic, integrated with bank and payment data, and data from apps downloaded by the user), and the other preferring Bluetooth Low Energy, and therefore proximity data.
Soon, the latter camp also split into two opposing lines of thought. Countries like France, the UK and, initially, Germany tried to develop “centralised” Bluetooth-based solutions, while Italy, Switzerland, Denmark, Estonia (and, ultimately, Germany itself) opted for a “decentralised” one. In the latter case, the “contact tracing” app is not doing contact tracing at all: it merely registers that two phones have been close to each other for long enough to consider the encounter at risk, so that a notification of potential exposure can be issued if one of the owners is diagnosed with COVID-19 within 14 days — and is willing to upload that data through the app.
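The decentralised flow just described can be sketched in a few lines of code. This is a deliberately simplified illustration, not the actual DP-3T or Google/Apple protocol: the class names, the daily ID rotation and the 15-minute threshold are all assumptions made for readability. What it does capture is the key design choice: the encounter log never leaves the phone, and matching against the IDs of diagnosed users happens locally.

```python
# Simplified sketch of a "decentralised" exposure notification scheme.
# All names, rotation intervals and thresholds are illustrative assumptions.
import secrets
from dataclasses import dataclass

@dataclass
class Encounter:
    ephemeral_id: bytes   # rotating ID broadcast by the other phone
    duration_min: int     # how long the two phones stayed close

class Phone:
    """One citizen's handset in a decentralised exposure notification scheme."""

    def __init__(self):
        # Rotating ephemeral IDs broadcast over Bluetooth LE; one per day here
        # for simplicity (real protocols rotate far more frequently).
        self.my_ids = [secrets.token_bytes(16) for _ in range(14)]
        self.encounters: list[Encounter] = []  # never leaves the device

    def record(self, other_id: bytes, duration_min: int) -> None:
        # Only log contacts long enough to be considered "at risk"
        # (15 minutes is an assumed threshold for this sketch).
        if duration_min >= 15:
            self.encounters.append(Encounter(other_id, duration_min))

    def check_exposure(self, published_positive_ids: set[bytes]) -> bool:
        # The central server only republishes IDs voluntarily uploaded by
        # diagnosed users; the matching happens on the device, so the server
        # never learns who met whom.
        return any(e.ephemeral_id in published_positive_ids
                   for e in self.encounters)
```

In a centralised architecture, by contrast, the encounter logs (or pseudonymous IDs of contacts) are uploaded to a server that performs the matching itself — which is precisely why that design became the focal point of the privacy debate described above.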
The debate concentrated on what precise architecture should inform and shape the sought-after anti-COVID-19 ADM systems, and on each option’s pros and cons in terms of privacy and fundamental rights, but also of cybersecurity and (at least potential) effectiveness.
A game-changer was the introduction of “exposure notification” APIs — a term adopted to avoid promoting the flawed idea that any app could replace “contact tracing” altogether — developed by tech giants Google and Apple, which together account for almost all operating systems installed on smartphones on the market.
No location data would be collected, claimed the tech giants — even though a New York Times report argued on July 20, months after the protocol’s announcement in April, that Google still required location services to be turned on for the Bluetooth notifications to work, even if, according to Mountain View, location data was not actually collected.
This helped move the debate forward, but at the same time it raised, once more, the issue of extremely powerful private multinational companies mandating technical rules that policymakers and state technologists could not shape in any way, but only follow and obey — suggesting that ADM responses to the global public health emergency can be more easily and effectively decided by Big Tech CEOs than by democratically elected governments.
In fact, all apps developed by countries that chose a Bluetooth-based architecture but refused to adopt Google and Apple’s ran into serious technical issues due to limitations and requirements imposed by the tech giants, even leading some countries to fully reconsider their ADM deployment strategy. The UK, for example, decided to ditch its own centralised solution after a trial on the Isle of Wight showed it could recognise just about 4% of iPhones. France even asked the Cupertino giant to relax some privacy features, so that its “StopCovid” app could work in the background — an issue that has consistently plagued apps with the same architecture, most notably in Australia, where the app was deemed “a terrible failure” — and “by any measure” so — in a Sydney Morning Herald editorial.
Apple refused to comply, and on May 26, top digital affairs officials from five EU governments (Germany, France, Italy, Spain and Portugal) strongly criticised both tech giants in a joint op-ed published in several languages. They argued that the imposition of an exposure notification standard from private entities was “the wrong signal when it comes to promoting open cooperation between governments and the private sector”, especially when one considers that “digital sovereignty” is arguably the core principle that informs the EU’s policy stance on digitisation strategies.
But this does not mean that GPS-based and “decentralised” Bluetooth-based apps are immune from bugs and glitches themselves. Qatar’s app, ‘EHTERAZ’, for example, uses both GPS and Bluetooth technologies, and yet an Amnesty Tech Security Lab investigation found “a critical vulnerability” in it “that would allow malicious actors to access sensitive personal information, such as names, national ID number, health status, and location data, for more than a million users in the country”, wrote Access Now. This is all the more relevant given Qatar’s model of ADM systems deployment, under which downloading the app is compulsory for all users, while those who don’t comply “face a disproportionate penalty of up to three years in prison and a fine of approximately 55,000 USD”.
Even Google/Apple-based, decentralised apps are not immune from inaccuracies and bugs, potentially leading to high rates of false positives/negatives — with authorities not even knowing “the number of people warned by the app”, as per a BBC investigation. This raises serious questions not only about such apps’ efficacy, but about whether that alleged efficacy can be measured at all.
A study published at the end of June by Trinity College Dublin researchers adds further concerns. By applying the proximity detection rules adopted by the German, Swiss and Italian exposure notification apps to the context of public transportation (a commuter tram), authors Douglas J. Leith and Stephen Farrell concluded that “the Swiss and German detection rules trigger no exposure notifications, despite around half of the pairs of handsets in our data being less than 2m apart”.
The Italian app, for its part, “has a true positive rate (i.e. correct detections of handsets less than 2m apart) of around 50%. However, it also has a false positive rate of around 50% i.e. it incorrectly triggers exposure notifications for around 50% of the handsets which are greater than 2m apart”. This means that its performance is “similar to that of triggering notifications by randomly selecting from the participants in our experiments, regardless of proximity”, the authors conclude.
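The arithmetic behind these figures is simple. As a minimal sketch — using hypothetical counts in the spirit of the experiment, not the study’s actual raw data:

```python
# Hypothetical confusion-matrix counts, for illustration only —
# not the Trinity College study's raw data.
def rates(tp: int, fn: int, fp: int, tn: int) -> tuple:
    """True positive rate and false positive rate from confusion-matrix counts."""
    return tp / (tp + fn), fp / (fp + tn)

# Suppose 20 handset pairs were within 2 m and 20 were farther apart,
# and the detection rule fired for 10 pairs in each group:
tpr, fpr = rates(tp=10, fn=10, fp=10, tn=10)
assert (tpr, fpr) == (0.5, 0.5)  # no better than notifying pairs at random
```

When the true positive rate equals the false positive rate, the detection rule carries no information about proximity at all — which is exactly the study’s point.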
All of this leaves us with a question: what data do ADM systems need to actually help with contact tracing — if they can, at all?
Locations vs proximity: what data do ADM systems need to actually help with contact tracing?
The idea behind different contact tracing and exposure notification apps may be similar, but different decision-making processes are entailed by different app architectures.
By collecting location data, GPS-based apps can, for example, help health authorities reconstruct the web of contacts of an individual who tested positive for COVID-19, thus allegedly contributing to contact tracing efforts by speeding them up and making them more effective and complete (logs are assumed to be more reliable than human memory and judgment alone), while at the same time making it possible to detect outbreaks happening in precise spots and areas within a city or country.
Also, they can inform the understanding of trends in the population that are relevant for public health. For example, an intelligent analysis of the movements of a large number of people can reveal a population’s attitudes towards social distancing rules. GPS is also adopted in the enforcement of quarantine rules.
On the other hand, Bluetooth-based apps do not collect location data, and are instead based on proximity. Under this model of ADM, smartphones on which a contact tracing or “exposure notification” app has been installed regularly emit a Bluetooth Low Energy signal containing a random, temporary key. This is used to create an encrypted log of all other phones equipped with the same app that qualify as a “contact”, i.e. that could potentially expose their owners to infection, were one of them to be diagnosed with the COVID-19 disease.
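The rotating-key mechanism can be sketched in a few lines. This is a deliberately simplified model, not the actual Google/Apple “Exposure Notifications” cryptography (which derives identifiers from a daily key via HKDF and AES):

```python
import hashlib
import hmac
import os

# Simplified sketch of the rotating-identifier idea (NOT the real
# Google/Apple cryptography): a phone keeps a secret daily key and
# broadcasts a short identifier derived from it that changes every
# time interval (~15 minutes in deployed protocols).

def daily_key() -> bytes:
    return os.urandom(16)  # fresh random secret each day, never broadcast

def rolling_identifier(key: bytes, interval: int) -> bytes:
    # Derive the 16-byte identifier broadcast during a given time interval.
    mac = hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256)
    return mac.digest()[:16]

key = daily_key()
ids = [rolling_identifier(key, i) for i in range(3)]
assert len(set(ids)) == 3                    # the broadcast identifier keeps changing...
assert rolling_identifier(key, 1) == ids[1]  # ...but can be recomputed from the key
```

Because nearby phones only ever overhear these short-lived identifiers, no phone learns anything durable about the others — unless the matching daily key is later published.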
This functionality is common to both centralised and decentralised Bluetooth-based apps. From there, however, the process differs. In a centralised app, such as Australia’s ‘COVIDSafe’ or the UK’s discontinued one, individuals who test positive for COVID-19 may be asked by health authorities to share the data collected through the app with a central database, which is then used to compute which contacts qualify as potential coronavirus exposures, and to actually track them down.
In a decentralised model, by contrast — such as Italy’s ‘Immuni’ or Germany’s ‘Corona Warn-App’ — that computation is performed on each individual phone. If a positive subject is willing to share this information through the app, all their “contacts” are then warned through the system, and may decide to seek medical attention as a result.
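A sketch of how on-device matching could work in a decentralised design, under the simplifying assumption that broadcast identifiers are derived with an HMAC (the real Google/Apple protocol uses HKDF and AES): the server publishes only the daily keys of consenting positive users, and all matching happens locally.

```python
import hashlib
import hmac
import os

# Simplified model of decentralised matching — not the actual protocol.

def ident(key: bytes, interval: int) -> bytes:
    mac = hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256)
    return mac.digest()[:16]

alice_key = os.urandom(16)

# Bob's phone overheard one of Alice's rolling identifiers during
# interval 7 and stored it locally; this log never leaves his device.
bob_log = {ident(alice_key, 7)}

# Alice tests positive and consents to upload: the server publishes her
# daily key — and nothing else — to every phone running the app.
published_keys = [alice_key]

# Bob's phone recomputes every identifier the published keys could have
# produced and intersects them with its own log.
exposed = any(
    ident(k, i) in bob_log
    for k in published_keys
    for i in range(96)  # 96 fifteen-minute intervals in a day
)
assert exposed  # the exposure notification is raised locally, on Bob's phone
```

The crucial property is that the server learns neither who Bob is nor whom Alice met: it only ever handles the keys of consenting positive users.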
This means that, unlike with centralised apps, health authorities are not given the whole chain of contacts of an infected individual, even anonymously. This is why such apps are correctly framed not as contact tracing apps, but as “exposure notification” apps: they only notify an individual of potential exposure to contagion, while authorities have no way of knowing — using app data alone — who that individual is, who they have been in contact with over the last two weeks, and, most importantly, where. As the BBC writes, these apps “operate more as a warning system” than a contact tracing system.
Difficult trade-offs need to be considered when evaluating pros and cons of such models of automated decision-making. GPS systems, for example, are much more invasive in terms of privacy and human rights, but at the same time may provide much more information that can be useful in tackling future outbreaks. Bluetooth systems, on the other hand, are less invasive, but arguably less useful, certainly less ambitious, and actually effective only when downloaded by large parts of the population and in combination with extensive, and readily deployed, testing programmes: what good is a notification that warns you of potential infection, otherwise?
Does ADM in contact tracing and exposure notification work at all?
Many pundits, institutions and civil society organisations weighed in on what data would actually be needed to at the same time maximise effectiveness and minimise the burden on fundamental rights. Answers are still provisional, as — even months after the first deployments — we still lack hard evidence on the effectiveness of all such ADM systems.
As a systematic review of the literature published in The Lancet on August 19 concluded, after analysing 110 full-text studies, “no empirical evidence of the effectiveness of automated contact tracing (regarding contacts identified or transmission reduction) was identified”.
In fact, what we do know casts several doubts over their efficacy, putting the initial enthusiasm around tech-based solutions to the pandemic to rest, and actually calling into question the very decision of deploying them in the first place.
An American Civil Liberties Union (ACLU) White Paper, for example, after having described all possible types of data collection (GPS, cell tower location data, Wi-Fi, Bluetooth and QR codes) concluded in April that “none of the data sources discussed above are accurate enough to identify close contact with sufficient reliability”.
GPS technology has “a best-case theoretical accuracy of 1 meter, but more typically 5 to 20 meters under an open sky”. Also, “GPS radio signals are relatively weak; the technology does not work indoors and works poorly near large buildings, in large cities, and during thunderstorms, snowstorms, and other bad weather”.
This is especially important for ADM: “Even if we were to imagine a set of location data that had pinpoint accuracy”, writes the ACLU paper, “there would still be problems translating that in any automated way into reliable guesses about whether two people were in danger of transmitting an infection”. Case in point is Israel, where “one woman was identified as a “contact” simply because she waved at her infected boyfriend from outside his apartment building — and was issued a quarantine order based on that alone”.
Cell tower location data also is not precise enough, especially in rural areas, and even China had to abandon it after trials did not return the desired results.
As for Bluetooth, even its own creators, Jaap Haartsen and Sven Mattisson, argued for caution: problems in terms of accuracy and “uncertainty in the detection range” are very real, they told The Intercept, “so, yes, there may be false negatives and false positives and those have to be accounted for”.
Skepticism is confirmed in another study by Trinity College’s Leith and Farrell, in which the researchers found that “the strength of the received Bluetooth signal can vary substantially depending on whether people walk side by side, or one behind the other. Whether they carry their phone in their back or front pocket, where they place it within their handbag, and so on.” This might not be a problem for Bluetooth in general, but it surely becomes one in the context of containing the outbreak of the virus.
According to the paper, walls and furniture, especially metal objects such as shelves and fridges in a supermarket shopping aisle, or train or bus, could have “a significant effect” on this crucial component of decentralised, Bluetooth-based ADM systems, affecting the very core of their potential contribution in addressing the pandemic: notifying users that are actually in danger of contracting the COVID-19 disease.
“For example”, argued Prof. Leith speaking with the Irish Times, “for two people walking around a large supermarket we found that the Bluetooth signal strength is much the same when they walk close together and when they walk 2m apart. When sitting around a meeting table with phones in their pockets we measured the signal strength to be very low, even for people sitting next to one another.” This fundamentally challenges the idea that exposure notification apps should be a pillar of effective broader contact tracing efforts.
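One way to see why signal strength is such a shaky proxy for distance is the log-distance path-loss model commonly used to convert received signal strength (RSSI) into a distance estimate. The constants below are illustrative assumptions, not values taken from any real contact tracing app:

```python
# Log-distance path-loss model: distance grows exponentially with the
# drop in received signal strength. Both constants are illustrative.
TX_POWER_DBM = -59.0   # assumed signal strength measured at 1 m
PATH_LOSS_EXP = 2.0    # free-space exponent; indoors it varies wildly

def estimated_distance(rssi_dbm: float) -> float:
    """Estimated distance in metres for a given received signal strength."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXP))

# A swing of a few dB — easily caused by a pocket, a handbag or a metal
# shelf — moves the estimate across the critical 2 m threshold:
for rssi in (-62.0, -66.0, -70.0):
    print(rssi, round(estimated_distance(rssi), 2))
```

Because the model is exponential in the measured signal, errors of a few dB — well within the variation the Trinity College researchers observed — translate into metres of uncertainty exactly around the threshold that decides whether a notification is triggered.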
The study therefore called for extensive testing prior to deployment of any such ADM systems, but — as previously noted — this is a complex endeavour in itself, as we don’t really know even how to define and measure “success” and “effectiveness” of such apps.
What we know, both from actual deployments so far and from the available literature, seems to confirm this fundamental confusion over such crucial metrics. Can we call an app “successful” based on its downloads and active users? Would this mean that Germany’s app, downloaded by more than 14 million citizens in just the first two weeks after launch, is a success story, even though for the first five weeks it has been shown not to work properly in the background on millions of Android-based Samsung and Huawei smartphones?
Also, some countries made downloading such apps mandatory, making comparisons moot. In fact, this yardstick would make India’s controversial GPS+Bluetooth-based ‘Aarogya Setu’ app the most successful: made mandatory for certain social categories, it was downloaded by some 127 million citizens in around 100 days — by far the most “popular” app in the world. Does this mean that we should excuse its many privacy and cybersecurity issues?
And what to make of the Indian app’s developer claim of a 24% rate of effectiveness? How to actually make sense of — and audit, really — the assertion that “24% of all the people estimated to have Covid-19 because of the app have tested positive”? Is any percentage above zero a success?
Questions concern how to even measure these variables in a decentralised, Apple/Google-based, app: how to evaluate whether these apps actually work, when it is impossible for authorities to reconstruct who received an “exposure notification” through the app, in what contexts, and to what results?
Thermal scanners, face recognition, immunity passports: should this be our new normal?
The COVID-19 pandemic is severely affecting the economy worldwide. But the virus did not spell disaster for all commercial sectors. Some, on the contrary, are profiting from it.
Take the latest forecasts for the thermal scanning and facial recognition technology markets, whose items are being aggressively repurposed and marketed as indispensable “anti-COVID” tools by a growing number of technology firms and startups, and deployed in supermarkets, theatres, cinemas, hospitals, stadiums, banks, public offices and of course private businesses.
“The global thermal-imaging market is estimated to grow to $4.6 billion by 2025 from $3.4 billion this year due to the coronavirus pandemic”, Reuters reported at the beginning of July 2020. This means that “there are now 170 companies selling fever detection technology meant to detect people potentially suffering from the coronavirus, up from fewer than 30 companies selling similar technology before the pandemic”, reported OneZero, citing an IPVM directory. Five out of the six biggest players are Chinese: Sunell, Dahua, Hikvision, TVT, and YCX.
The market for face and voice biometrics is also about to expand significantly thanks to the pandemic, with an expected leap to $22.7 billion in 2027 from a current estimate of $7.2 billion — more than tripling in value, according to Global Industry Analysts estimates.
This is both unsurprising and surprising. Unsurprising, given that face recognition is being widely adopted and deployed, both inside and outside the EU, with little to no meaningful democratic debate and safeguards in place.
But also surprising, given what we know about these technologies’ scant usefulness in the battle against COVID-19. As a recent, groundbreaking study by the US National Institute of Standards and Technology (NIST) has argued, contrary to what several developers claimed in PR material over the course of the pandemic, “wearing face masks that adequately cover the mouth and nose causes the error rate of some of the most widely used facial recognition algorithms to spike to between 5 percent and 50 percent”.
Doubts abound around the accuracy and actual usefulness of thermal scanners too. According to the Electronic Frontier Foundation, thermal cameras “threaten to build a future where public squares and sidewalks are filled with constant video surveillance—and all for a technology that may not even be effective or accurate at detecting fevers or infection”.
More precisely, “experts are now concluding that thermal imaging from a distance—including that in camera systems that claim to detect fevers—may not be effective. The cameras typically only have an accuracy of +/- 2 degrees Celsius (approximately +/- 4 degrees Fahrenheit) at best”. Also, “human temperatures tend to vary widely, as much as 2 degrees Fahrenheit. Not only does this technology present privacy problems, but the problem of false positives can not be ignored. False positives carry the very real risk of involuntary quarantines and/or harassment”.
Even perfect accuracy would not be enough in the context of fighting the COVID-19 pandemic, as many infected individuals are asymptomatic, or have symptoms mild enough not to trigger a “fever detecting” camera. One might be positive for COVID-19, for example, and not have a fever at all.
These issues are true of “AI” solutions for the pandemic more broadly. As a study (not peer-reviewed at the time of writing) conducted by the WHO with universities in New York, Durham and Montreal concluded, “there is a broad range of potential applications of AI covering medical and societal challenges created by the COVID-19 pandemic; however, few of them are currently mature enough to show operational impact”.
Some countries, from Estonia to the UK, are experimenting with immunity passports too. The rationale for their adoption, and the case for urgently doing so, is the same: when adopted as a digital “credential”, as per Privacy International, individuals become able to prove their health status (positive, recovered, vaccinated, etc.) whenever needed in public contexts, thus enabling governments to avoid further total lockdowns.
And yet, the London-based NGO warns that, similarly to all the tools previously described, “there is currently no scientific basis for these measures, as highlighted by the WHO. The nature of what information would be held on an immunity passport is currently unknown.”
What is already known, however, is that using immunity passports would entail several “social risks”, serving “as a route to discrimination and exclusion, particularly if the powers to view these passports falls on people's employers, or the police”.
A common theme emerges around all these tools: while marketed as necessary for “going back to normal”, what they actually attempt to impose — with no evidence whatsoever as to their effectiveness — is a new normal based on pervasive, health-based surveillance. This socio-technical apparatus — as shown in many examples already, most notably in the Chinese city of Hangzhou — may have been born out of a public health emergency, but it is definitely here to stay, adding to the already concerning arsenal of surveillance devices deployed before the SARS-CoV-2 outbreak.
Whether this “new normal” actually helps contain the spread of the COVID-19 disease is a question that can only be answered at a later stage of the pandemic, when actual results from implementations of such ADM systems will (hopefully) become available, and only with further, much more in-depth research. In the meantime, a rigorous approach to measuring and evaluating those results will also need to be developed — if that is possible at all.
But no matter the evidence, a general conclusion can already be drawn from this early foray into the status of ADM in tackling public health emergencies: rushing to novel technological solutions to as complex a social problem as a pandemic can result both in not solving the social problem at hand, and in needlessly normalising surveillance technologies. These systems should, instead, be widely and openly discussed before adoption, if they truly are to be compatible with our fundamental rights and democracy.