The World Health Organization has published guidance on 'Ethics and governance of artificial intelligence for health'. While not dedicated to the role of AI in the pandemic, the document does offer some useful indications of how AI should be used in order to be effective, at a time when its overall effectiveness is increasingly called into question.
The document, the product of discussions among 20 experts in health, ethics, human rights and technology, argues that AI "holds great promise" for health, including in response to the COVID-19 pandemic. It could, for example, potentially "support pandemic preparedness and response, inform the decisions of health policy-makers", and even "enable resource-poor countries, where patients often have restricted access to health-care workers or medical professionals, to bridge gaps in access to health services."
At the same time, the WHO guidance warns that "numerous new applications have emerged for responding to the pandemic, while other applications have been found to be ineffective."
Many uses were hypothesized, and as many AI-based tools were trialled and deployed over the course of the pandemic, but their drawbacks became increasingly apparent:
"Several applications have raised ethical concerns in relation to surveillance, infringement on the rights of privacy and autonomy, health and social inequity and the conditions necessary for trust and legitimate uses of data-intensive applications".
During the pandemic, the document notes, AI has been used for both detection and prediction. It was deployed to assess the current state of the outbreak, to produce forecasting models based on real-time movement and location data, and to support the development of vaccines and treatments. More generally, and consistent with a broader trend in the health sector, "the possible uses of AI for different aspects of outbreak response have also expanded during the COVID-19 pandemic."
But again, the "promise" was not kept, at least so far:
"While many possible uses of AI have been identified and used during the COVID-19 pandemic, their actual impact is likely to have been modest; in some cases, early AI screening tools for SARS-CoV2 "were utter junk" with which companies "were trying to capitalise on the panic and anxiety""
Part of the reason may lie in the fact that "AI systems based on machine learning require accurate training, while data are initially scarce for a new disease such as COVID-19". This is apparent in how AI was deployed to diagnose COVID-19 from chest X-rays: a "significant proportion" of such systems, argued Cambridge University researcher Michael Roberts in New Scientist, "were trained on adults with covid-19 and children without it, so their algorithms were more likely to be detecting whether an X-ray came from an adult or a child than if that person had covid-19."
Issues, however, abound, and already provide at least some elements for the "assessment" required by the WHO guidance in order to understand whether AI is "accurate, effective and useful" in the context of the pandemic. For example, Roberts and colleagues wrote a research paper reviewing "hundreds" of papers (published between January and October 2020) that claimed that machine learning can help to diagnose COVID-19 from chest scans. What they found is deeply troubling:
"none of them produced tools that would be good enough to use in a clinical setting."
And "something has gone seriously wrong when more than 300 papers are published that have no practical benefit", Roberts added.
Methodological critiques are possibly even harsher:
"Our review found that there were often issues at every stage of the development of the tools mentioned in the literature. The papers themselves often didn’t include enough detail to reproduce their results."
Experts consulted and evidence reviewed by MIT Technology Review broadly corroborate these findings, even concluding that while "many hundreds of predictive tools were developed" to tackle COVID-19, "none of them made a real difference, and some were potentially harmful".
Hype and "unrealistic expectations" around machine learning and AI in response to the pandemic led to the mostly unchecked and rushed development and deployment of tools whose efficacy had no evidence whatsoever, the article argued.
In some cases, secrecy was contractually imposed, thus preventing independent scrutiny. According to epidemiologist Laure Wynants, consulted by Technology Review, this meant that "some hospitals are even signing nondisclosure agreements with medical AI vendors." And "when she asked doctors what algorithms or software they were using, they sometimes told her they weren't allowed to say."
If, as Prof. Jason H. Moore put it in an op-ed in Scientific American, "AI is at an inflection point in health care", COVID-19 only seems to have made the need for reform, and for claims grounded in evidence rather than hype, more urgent.
To maximise AI's potential for public health, the WHO guidance proposes the implementation of six principles:
- Protect human autonomy ("humans should remain in full control of health-care systems and medical decisions")
- Promote human well-being, human safety and the public interest (no harm to humans)
- Ensure transparency, explainability and intelligibility (which could be highly problematic, as algorithms deployed in health, argues Prof. Moore, "are rarely able to provide an answer to the question of why they think an answer is a good one")
- Foster responsibility and accountability (e.g., through "transparent specification of the tasks that systems can perform and the conditions under which they can achieve the desired level of performance")
- Ensure inclusiveness and equity ("AI developers should be aware of the possible biases in their design, implementation and use and the potential harm that biases can cause to individuals and society")
- Promote artificial intelligence that is responsive and sustainable.
These are very broad principles, met with very specific and complex problems in actual deployments. Whether those problems can be solved is yet another "promise" to be fulfilled.