
“Ethical AI guidelines”: Binding commitment or simply window dressing?

By Veronika Thiel

In April this year, we began to compile an overview of guidelines for so-called “ethical AI”. Thanks to contributions from our readers, the inventory has grown to 83 entries and is finding global resonance. It is therefore time for an interim review: there are some substantial contributions, but many entries lack substance. In addition, there is a dearth of oversight mechanisms to ensure that these guidelines are adhered to.

The discussion around automated decisions and their impact on society frequently focuses on the question of whether “Artificial Intelligence” should be regulated, and if so, how and by whom. Voluntary self-regulation is often proposed as a solution. Proponents point to existing standards and commitments – but it is unclear what these standards are and whether they are adhered to. We therefore looked for specific examples to document what is happening in the field of “Ethics of Algorithms”: who is developing what, and what commitment do these initiatives require?

We published a first overview on 9 April 2019. The response was positive and shows the inventory’s relevance: firstly, it is already cited in academic research; secondly, it has grown to 83 entries. Now is therefore a good time to present some interim findings.

The most important insight: as already pointed out in our initial publication, there are very few initiatives with governance or oversight mechanisms that ensure and enforce compliance with voluntary commitments.

Most documents are recommendations, presentations of principles, or guidelines. Of the 21 examples that can be labelled as voluntary commitments, quality seals or similar, only three mention some sort of oversight mechanism or quality control. This does not mean that the others have none, but any such mechanisms are not explained in publicly accessible material. Without oversight mechanisms, however, there is little incentive to adhere to ethical principles in the development of automated decision-making systems.

 

“Ethical AI” initiatives with binding character, and information about oversight or control mechanisms

No information available: 18
Some information available: 1
Information available: 2
Total: 21

 

As voluntary self-regulation is a popular means of avoiding regulation, it is unsurprising that most initiatives in the inventory are industry-led. Companies such as SAP, Sage, Facebook, Google and many others have either developed internal principles or published general guidelines, in some cases as members of alliances (for example, the Partnership on AI) or under the lead of industry associations.

 

Initiator: Number of initiatives

Private sector: 22
Civil society: 18
Government (incl. EU): 18
Academic sector: 8
Industry associations: 6
Others: 4
Professional associations: 4
International organisations: 3
Total: 83

 

One third of entries claim a global scope. Aside from private sector alliances, global civil society institutions (e.g. the UNI Global Union) and cross-sectoral organisations such as the Future of Life Institute have also published sets of principles. The high number of German entries is likely due to language barriers. We hope that, as awareness of the inventory grows, we will receive further entries from other countries.

 

Country: Number of entries

Germany: 12
USA: 6
Canada: 5
UK: 4
China: 3
France: 3
Denmark: 2
Singapore: 2
Dubai: 1
Japan: 1
Netherlands: 1
South Korea: 1
Spain: 1
Global: 32

Regional/supranational organisations

EU: 5
OECD countries: 1
G20: 1
Latin America: 1
UNESCO: 1

Total: 83

 

We are sure that more initiatives exist, and we therefore welcome further submissions.

We did not undertake a content analysis. At a glance, however, all initiatives mention fairness, accountability, ethics and transparency. They vary in length from five bullet points to detailed publications with suggestions on how to put principles into practice.

The most important finding remains the lack of oversight and control. Many initiatives are new – it is possible that they will develop further. However, some observers argue that these publications do not represent a sincere effort, but are merely window dressing to evade governmental regulation. The discussion is reminiscent of Corporate Social Responsibility (CSR). It took decades for CSR to at least partially shed its reputation for green- or whitewashing and to establish global standards that many companies adhere to.

How this discourse on the appropriate use of “Artificial Intelligence” systems develops will depend strongly on the motives of the companies and organisations that build them.

 



Published: June 20, 2019
Category: update
