Michele Loi (he/him)

Senior Scientific Advisor

Photo: Julia Bornkessel, CC BY 4.0

Dr. Michele Loi is a Senior Scientific Advisor at AlgorithmWatch and an adjunct professor (“professore a contratto”) for the course AI, Ethics and Law at the University of Milan, following his role as a Marie Skłodowska-Curie Fellow at Politecnico di Milano, where he researched fairness in AI. He holds a PhD in Political Theory from LUISS University and has extensive postdoctoral experience at institutions including the University of Zurich, ETH Zurich, and the World Health Organization. Dr. Loi is a leading interdisciplinary researcher at the intersection of AI ethics, technology governance, and social philosophy, with several high-impact publications. He has served as principal investigator on multiple projects examining algorithmic fairness, data ethics, and technology governance, and has authored over 35 peer-reviewed articles, 9 conference proceedings, 3 books, and 7 book chapters. His publication record demonstrates particular expertise in algorithmic fairness, the ethical implications of AI systems, and the social impacts of technology, qualifying him to develop methodological frameworks for analyzing the systemic risks of AI through both technical and socio-political lenses.

Articles for AlgorithmWatch


Position Paper

Focus Attention on Accountability for AI – not on AGI and Longtermist Abstractions

Many tech CEOs and scientists praise AI as the savior of humanity, while others see it as an existential threat. We explain why both fail to address the real questions of responsibility.


Making sense of the Digital Services Act

How to define platforms’ systemic risks to democracy

It remains unclear how the largest platforms and search engines should go about identifying “systemic risks” to comply with the DSA. AlgorithmWatch outlines a methodology that will serve as a benchmark for how we, as a civil society watchdog, will judge the risk assessments that are currently being conducted.

Investigative journalism and algorithmic fairness

Investigating systems that make decisions about humans, often termed algorithmic accountability reporting, is becoming ever more important. To do it right, reporters need to understand concepts of fairness and bias – and work in teams. A primer.

Automated Decision-Making Systems in the Public Sector – An Impact Assessment Tool for Public Authorities

How can we ensure a trustworthy use of automated decision-making systems (ADMS) in the public administration? AlgorithmWatch and AlgorithmWatch Switzerland developed a concrete and practicable impact assessment tool for ADMS in the public sector. This publication provides a framework ready to be implemented for the evaluation of specific ADMS by public authorities at different levels.

Towards accountability in the use of Artificial Intelligence for Public Administrations

Michele Loi and Matthias Spielkamp analyze the regulatory content of 16 guideline documents on the use of AI in the public sector by mapping their requirements onto those of our philosophical account of accountability. They conclude that while some guidelines refer to processes that amount to auditing, the debate would benefit from more clarity about the nature of auditors’ entitlements and the goals of auditing, not least in order to develop ethically meaningful standards against which different forms of auditing can be evaluated and compared.

“We must save privacy from privacy itself”

People Analytics must benefit the people

An ethical analysis of data-driven algorithmic systems in human resources management, by Michele Loi