Michele Loi (he/him)

Senior Research Advisor

Photo: Julia Bornkessel, CC BY 4.0

Michele Loi, Ph.D., is a Marie Skłodowska-Curie Individual Fellow at the Department of Mathematics of Politecnico di Milano with a research project on Fair Predictions in Health. He is also co-principal investigator of the interdisciplinary project Socially Acceptable and Fair Algorithms, funded by the Swiss National Science Foundation, and was Principal Investigator of the project "Algorithmic Fairness: Development of a methodology for controlling and minimizing algorithmic bias in data based decision making", funded by the Swiss Innovation Agency.

Articles for AlgorithmWatch


Making sense of the Digital Services Act

How to define platforms’ systemic risks to democracy

It remains unclear how the largest platforms and search engines should go about identifying “systemic risks” to comply with the DSA. AlgorithmWatch outlines a methodology that will serve as a benchmark for how we, as a civil society watchdog, will judge the risk assessments that are currently being conducted.

Investigative journalism and algorithmic fairness

Investigating systems that make decisions about humans, often termed algorithmic accountability reporting, is becoming ever more important. To do it right, reporters need to understand concepts of fairness and bias – and work in teams. A primer.

Automated Decision-Making Systems in the Public Sector – An Impact Assessment Tool for Public Authorities

How can we ensure the trustworthy use of automated decision-making systems (ADMS) in public administration? AlgorithmWatch and AlgorithmWatch Switzerland developed a concrete and practicable impact assessment tool for ADMS in the public sector. This publication provides a framework ready to be implemented by public authorities at different levels for the evaluation of specific ADMS.

Towards accountability in the use of Artificial Intelligence for Public Administrations

Michele Loi and Matthias Spielkamp analyze the regulatory content of 16 guideline documents on the use of AI in the public sector by mapping their requirements onto those of our philosophical account of accountability. They conclude that while some guidelines refer to processes that amount to auditing, the debate would benefit from more clarity about the nature of auditors' entitlement and the goals of auditing, not least in order to develop ethically meaningful standards against which different forms of auditing can be evaluated and compared.

“We must save privacy from privacy itself”

People Analytics must benefit the people

An ethical analysis of data-driven algorithmic systems in human resources management, by Michele Loi