In the realm of paper tigers – exploring the failings of AI ethics guidelines
AlgorithmWatch upgraded its AI Ethics Guidelines Global Inventory by revising its categories and adding a search and filter function. Of the more than 160 guidelines we compiled, only a fraction have the necessary enforcement mechanisms in place, and the overwhelming majority comes from Europe and the US.
by Leonard Haas and Sebastian Gießler, with additional research by Veronika Thiel
Last year, AlgorithmWatch launched the AI Ethics Guidelines Global Inventory to compile frameworks and guidelines that seek to set out principles for how systems for automated decision-making (ADM) can be developed and implemented ethically. We have now upgraded this directory by revising its categories and adding a search and filter function. With the support of many contributors, we have compiled more than 160 guidelines. Our assessment reveals that only a fraction have adequate enforcement mechanisms in place and that the overwhelming majority comes from Europe and the US.
An initial evaluation in 2019 had already shown that AI ethics guidelines lacked enforcement mechanisms and that there was a strong appetite among our audience to continue watching that space. Over the last year, the number of guidelines has continued to grow. The private sector, government initiatives and civil society are more or less evenly represented.
Of the more than 160 documents in our database, only ten have practical enforcement mechanisms. Policies from both the private and the public sector are mostly voluntary commitments or general recommendations. Strikingly, the private sector relies heavily on voluntary commitments, while state actors mainly issue recommendations to administrative bodies. Many guidelines contain wording that plays down their scope, presenting them as an orientation aid or a proposal.
Due to the multitude of fields of application, goals and actors, many different approaches, hybrids and combinations exist. This heterogeneity makes it difficult to sort the documents into distinct categories. The guidelines in the database range from single pages of briefly outlined principles to detailed publications, and their publishers range from young tech start-ups to international organizations. Yet even the ethical guidelines of the IEEE, the world’s largest professional association of engineers, have largely failed to prove effective: large technology companies such as Facebook, Google and Twitter do not implement them, even though many of their engineers and developers are IEEE members.
Most of the guidelines in the inventory come from the wealthy countries of the OECD. Voices from the Global South have so far been poorly represented.
The majority of the guidelines examined are limited to vague formulations, and in most cases there are no enforcement mechanisms at all. Often, they merely string together principles without a clear view of how these are to be applied in practice. As a result, many guidelines are unsuitable as a tool against potentially harmful uses of AI-based technologies and will likely fail to prevent damage. This raises the question of whether guidelines that can neither be applied nor enforced do more harm than having no ethical guidelines at all. Ethics guidelines should be more than a PR tool for companies and governments.
Visit the AI Ethics Guidelines Global Inventory
Photo by Eric Kilby on flickr (CC-BY)