
Summary: Could AI Chatbots Influence a Government’s Decisions?
What does it mean for democracy if our political leaders and government officials allow AI to shape their decisions?

Dr. Michele Loi is a Senior Scientific Advisor at AlgorithmWatch and adjunct professor (“professore a contratto”) for the course AI, Ethics and Law at the University of Milan, following his role as Marie Skłodowska-Curie Fellow at Politecnico di Milano researching fairness in AI. With a PhD in Political Theory from LUISS University and extensive postdoctoral experience at institutions including the University of Zurich, ETH Zurich, and the World Health Organization, Dr. Loi has established himself as a leading interdisciplinary researcher at the intersection of AI ethics, technology governance, and social philosophy, with several high-impact publications. His research portfolio includes principal investigator roles on multiple projects examining algorithmic fairness, data ethics, and technology governance, with over 35 peer-reviewed articles, 9 conference proceedings, 3 books, and 7 book chapters. His publication record demonstrates particular expertise in algorithmic fairness, the ethical implications of AI systems, and the social impacts of technology, making him well qualified to develop methodological frameworks for analyzing the systemic risks of AI through both technical and socio-political lenses.


Many tech CEOs and scientists praise AI as the savior of humanity, while others see it as an existential threat. We explain why both fail to address the real questions of responsibility.

It remains unclear how the largest platforms and search engines should go about identifying “systemic risks” to comply with the DSA. AlgorithmWatch outlines a methodology that will serve as a benchmark for how we, as a civil society watchdog, will judge the risk assessments that are currently being conducted.

Investigating systems that make decisions about humans, often termed algorithmic accountability reporting, is becoming ever more important. To do it right, reporters need to understand concepts of fairness and bias – and work in teams. A primer.

How can we ensure a trustworthy use of automated decision-making systems (ADMS) in public administration? AlgorithmWatch and AlgorithmWatch Switzerland developed a concrete and practicable impact assessment tool for ADMS in the public sector. This publication provides a ready-to-implement framework for the evaluation of specific ADMS by public authorities at different levels.

Michele Loi and Matthias Spielkamp analyze the regulatory content of 16 guideline documents on the use of AI in the public sector by mapping their requirements onto those of our philosophical account of accountability. They conclude that while some guidelines refer to processes that amount to auditing, the debate would benefit from more clarity about the nature of auditors’ entitlements and the goals of auditing, not least in order to develop ethically meaningful standards against which different forms of auditing can be evaluated and compared.


An ethical analysis of data-driven algorithmic systems in human resources management, by Michele Loi