Today, we launch our AI Ethics Guidelines Global Inventory.
The way automated decision-making (ADM) systems should be regulated is hotly disputed. Recent months have seen a flurry of frameworks and guidelines that seek to set out principles for how ADM systems can be developed and implemented ethically. There are governmental initiatives (such as the report by the House of Lords in the UK) and supra-national efforts (such as the EU Commission’s High-Level Expert Group – see our comments on the draft here and a report on the final version here). The private sector and civil society have not been idle either.
With our AlgorithmWatch AI Ethics Guidelines Global Inventory, we started to map the landscape of these frameworks. They vary in content and detail, but we found a few common traits:
- All include similar principles on transparency, equality/non-discrimination, accountability and safety. Some add further principles, such as the demand that AI be socially beneficial and protect human rights.
- Most frameworks are developed by coalitions or by institutions such as universities, which then invite companies and individuals to sign up to them.
- Only a few companies have developed their own frameworks.
- Almost all are voluntary commitments; only three or four indicate an oversight or enforcement mechanism.
- Apart from around ten documents, all were published in 2018 or 2019 (some are undated).
- Overwhelmingly, the declarations take the form of statements, e.g. “we will ensure our data sets are not biased”. Few include recommendations or examples of how to operationalise the principles.
Our overview is certainly not complete; it is meant as a starting point. We assume there are many more such guidelines that we are not aware of – whether because of our lack of language skills or simply because they have not been announced widely. This is why we invite all of you to submit further examples we may have missed.
Visit the AI Ethics Guidelines Global Inventory and use the form to submit.