Towards accountability in the use of Artificial Intelligence for Public Administrations

Michele Loi and Matthias Spielkamp analyze the regulatory content of 16 guideline documents on the use of AI in the public sector by mapping their requirements to those of their philosophical account of accountability. They conclude that while some guidelines refer to processes that amount to auditing, the debate would benefit from more clarity about the nature of auditors' entitlement and the goals of auditing, not least in order to develop ethically meaningful standards against which different forms of auditing can be evaluated and compared.

Matthias Spielkamp
Executive Director, Co-Founder & Shareholder

In their analysis, Loi and Spielkamp argue that the phenomena of distributed responsibility, induced acceptance, and acceptance through ignorance constitute instances of imperfect delegation when tasks are delegated to computationally driven systems. Imperfect delegation challenges human accountability. They hold that both direct public accountability via public transparency and indirect public accountability via transparency to auditors in public organizations can be instrumentally ethically valuable and required as a matter of deontology by the principle of democratic self-government.




Michele Loi (he/him)

Senior Research Advisor


Michele Loi, Ph.D., is a Marie Skłodowska-Curie Individual Fellow at the Department of Mathematics of the Politecnico di Milano, with a research project on Fair Predictions in Health. He is also co-principal investigator of the interdisciplinary project Socially Acceptable and Fair Algorithms, funded by the Swiss National Science Foundation, and has been Principal Investigator of the project "Algorithmic Fairness: Development of a methodology for controlling and minimizing algorithmic bias in data based decision making", funded by the Swiss Innovation Agency.
