by Fabio Chiusi
In an unprecedented global social experiment in health surveillance, a plethora of automated decision-making (ADM) systems — including systems based on artificial intelligence (AI) — were deployed during the COVID-19 pandemic. They were supposed to tackle fundamental public health issues. Nonetheless, too often, they were adopted with almost no transparency, no evidence of their efficacy, no adequate safeguards, and insufficient democratic debate.
This report is the result of a year of monitoring the rollout and use of such systems, documented in our Tracing The Tracers project.
Many of our findings are consistent with those highlighted in previous AlgorithmWatch publications, such as the ‘Automating Society 2020’ report and ‘ADM Systems in the COVID-19 Pandemic: A European Perspective’. Opacity and a lack of evidence, oversight, and substantive debate around the deployment of ADM systems preceded the pandemic; the response to COVID-19 only confirmed these trends. Now the situation is even worse than before COVID-19, because ADM systems include potentially life-saving tools.
Technological solutionism — i.e., reducing complex social issues to technological issues in need of a technological solution — emerged as the clear ideological background to most deployments of ADM pre-COVID. This kind of solutionism was on full display again during the pandemic and strongly influenced both public health policymaking and public perceptions.
Some ADM tools (digital contact tracing (DCT) apps and digital COVID certificates (DCCs)) have been hotly debated, but rarely in an evidence-based fashion: the debate mostly rested on contradictory, faulty, and incomparable methods and results. And, while the use of these tools is partly acceptable in an emergency, too often it resulted in unfounded promises and marketing-oriented hype.
When it comes to the contribution of ADM systems in tackling some of the most pressing issues of the pandemic — containing infections, alleviating pressure on hospitals, allowing safe travel and social gatherings, prioritizing vaccines for those most in need — only a mixed and provisional judgment can be given. Even though some ADM-based responses did help tackle COVID-19, there is, so far, insufficient scientific evidence to support the conclusion that DCT apps, DCC schemes, AI, and/or algorithms have been central, fundamental, or even necessary to effectively respond to the pandemic.
DCT apps and DCC schemes — particularly in their “Green Pass” version for domestic uses — proved controversial, not least regarding their effectiveness. However, the contribution of AI-based ADM approaches adopted in response to COVID-19 was, arguably, even more controversial and polarizing. Some argued in favor of an enthusiastic, solutionist future for public health thanks to AI. Others were more pragmatic and showed that, so far, the actual results produced by AI have been wildly overblown, subject to hype, and even exploited in dangerous, Cold War-style propaganda among conflicting superpowers.
Extremes abounded. On the one hand, dystopian applications of ADM have been repeatedly tested and/or deployed throughout the pandemic — mostly outside of Europe, according to our (incomplete) survey of the field. On the other hand, some promising applications have been adopted, including those used in vaccine research, distribution, and prioritization, in providing early COVID-19 diagnosis, assessing the risk of severe outcomes, creating “smart”, efficient testing strategies, and assisting doctors with decision-making. Some of these uses might have saved lives, even though it is impossible to quantify their contribution.
As millions of lives were — and still are — at stake, blunt, ideologically driven assessments have no place in informed discussions of the actual contribution of ADM systems to the fight against COVID-19.
The single most worrying trend the Tracing The Tracers project documented throughout the pandemic was how the crisis was exploited as an excuse to further entrench and normalize the surveillance, monitoring, measuring, and prediction of an ever-growing number of daily activities — now essentially including public and personal health. This is all the more concerning given the many bugs, fakes, data leaks, and instances of function creep witnessed in the ADM tools deployed both in and outside of Europe. For example, some law enforcement authorities could access contact tracing data for criminal investigations.
In the ‘Automating Society 2020’ report, we concluded that the algorithmic status quo was untenable, and needed to change profoundly. Even though new normative frameworks in Europe and beyond are about to expand governance approaches to ADM systems on an enormous scale, so far, the pandemic has only perpetuated that status quo.
Now that ADM systems contribute to potentially lifesaving decisions, it is even more urgent to open the health surveillance ‘black box’. If we are to tackle both the current and future pandemics effectively and democratically, we must build a more transparent, evidence-based, and democratic algorithmic status quo.
To bring about a more democratic and evidence-based governance of ADM systems in the pandemic, some major trends need to be reversed, some cautions applied, and some basic principles respected. The Tracing The Tracers project highlighted the following elements to consider in order to facilitate a change toward a better algorithmic status quo:
Show us the evidence!
Now that the pandemic has raged for almost two years, there is no longer any justification for opaque impositions with no clear end in sight — if there ever was one in the first place. Future ADM deployments must be evidence-based, transparent, clearly limited in scope and duration, and more democratically discussed. This will help remove abusive systems and make the most of those that promote public health.
Protect our rights!
The pandemic must not be treated as an excuse to normalize vague and undefined exceptions to principles of EU law and international human rights law in relation to the use of ADM systems, such as necessity, proportionality, data minimization, privacy, respect for human rights, fairness, and equity.
Technology (alone) is not a solution.
The pandemic is a complex issue, with enormous economic, societal, normative, technological, and public health consequences. Therefore, it should not be treated as a primarily — or worse, exclusively — technological issue, to be “solved” by a technological tool. Not all technological innovations can be put to good use in society. Some of them should be banned altogether, including during a public health emergency — for example, biometric recognition in publicly accessible spaces that amounts to mass surveillance.
Make sure mass health surveillance does not become the new normal!
While many outcomes of COVID-related ADM tools are controversial, one is not: the pandemic has further accelerated the ongoing normalization of pervasive and, in some cases, mass surveillance — even in democratic countries and the EU, where digital ID schemes, biometrics (at the borders), and tracking schemes risk forming a complex “surveillance infrastructure” that many see as problematic. There must be a clearly defined post-pandemic return to a normal in which mass surveillance is and remains banned from societies.
As EU decision makers, provide more leadership in the next pandemic, and learn from the current one!
The EU contributed fewer cases of dangerous COVID-related ADM systems to our database, compared to Asia and Africa, for example. However, the EU failed to properly govern important developments throughout the pandemic. EU guidelines and principles were needed and were welcomed when — as in the case of digital contact tracing apps and digital COVID certificates — they arrived. However, contact tracing apps were arguably standardized by Google and Apple more than by the EU. The EU’s interoperability efforts on a global standard for individuals to prove their COVID status digitally before traveling internationally were also needed and important, but domestic COVID certificates have been left to the whims of Member States. Precise rules and limits to AI-based applications are still absent.
Avoid an AI arms race!
This is especially true of the US and China. As Bloomberg notes, “by miscalculating the others’ abilities, both superpowers risk overestimating their adversary’s strengths and overcompensating in a way that could lead to a Cold War-style AI arms race.” Steering toward an evidence-based approach to ADM systems in public health and beyond can help avoid yet another global conflict based on mistrust, manipulation, and ideology.
Gather more evidence!
(Too) much is still not understood about how ADM systems impacted the pandemic. Further research is not only needed but vital to better inform future public health responses. In this field, the work of academia and civil society is key, and should be supported well beyond the end of the COVID-19 pandemic.