Ethical guidelines issued by engineers’ organization fail to gain traction

The world’s largest professional association of engineers released its ethical guidelines for automated systems last March. A review by AlgorithmWatch shows that Facebook and Google have yet to acknowledge them.

Nicolas Kayser-Bril
Reporter

In early 2016, the Institute of Electrical and Electronics Engineers, a professional association known as IEEE, launched a “global initiative to advance ethics in technology.” After almost three years of work and multiple rounds of exchange with experts on the topic, it released the first edition of Ethically Aligned Design, a 300-page treatise on the ethics of automated systems, last March.

The general principles issued in the report focus on transparency, human rights and accountability, among other topics. As such, they are not very different from the 83 other ethical guidelines that researchers from the Health Ethics and Policy Lab of the Swiss Federal Institute of Technology in Zurich reviewed in an article published in Nature Machine Intelligence in September. However, one key aspect sets IEEE apart from the think tanks behind most other guidelines. With over 420,000 members, it is the world’s largest engineers' association, with roots reaching deep into Silicon Valley. Vint Cerf, one of Google’s Vice Presidents, is an IEEE “life fellow.”

Because the purpose of the IEEE principles is to serve as a “key reference for the work of technologists”, and because many technologists contributed to their conception, we wanted to know how three technology companies, Facebook, Google and Twitter, were planning to implement them.

Transparency and accountability

Principle number 5, for instance, requires that the basis of a particular automated decision be “discoverable”. On Facebook and Instagram, the reasons why a particular item is shown on a user’s feed are anything but discoverable. Facebook’s “Why You're Seeing This Post” feature explains only that “many factors” are involved in the decision to show a specific item. The help page designed to clarify the matter fails to do so: many sentences there use opaque wording (users are told that “some things influence ranking”, for instance) and the basis of the decisions governing their newsfeeds is impossible to find.

Principle number 6 states that any autonomous system shall “provide an unambiguous rationale for all decisions made.” Google’s advertising systems do not provide an unambiguous rationale when explaining why a particular advert was shown to a user. Clicking on “Why This Ad” reveals only that an “ad may be based on general factors … [and] information collected by the publisher” (our emphasis). Such vagueness is antithetical to the principle’s requirement.

In June, AlgorithmWatch sent detailed letters (which you can read below this article) with these examples and more, asking Google, Facebook and Twitter how they planned to implement the IEEE guidelines. After a great many emails, phone calls and in-person meetings, only Twitter provided a substantive answer. Google gave a vague comment and Facebook promised an answer that never came.

Twitter, which in the past few years has been facing weak growth and a public backlash over the abuse some of its users face, wrote that it agreed with the IEEE principles. However, its change of course towards more accountability, notably the introduction in December 2018 of a button that lets users choose between an algorithm-curated and a chronological news feed, predates the publication of the IEEE ethical principles.

Wrong questions

Asking why Google and Facebook show no sign of implementing the guidelines is the “wrong question” to ask, said Konstantinos Karachalios, managing director of IEEE, in a telephone interview. For him, the general principles are not like WiFi, a technical standard that can be clearly and rapidly implemented. Instead, they aim at educating the profession and at “shaping policy.” In this regard, Ethically Aligned Design was a great success, he said. The OECD, a club of rich countries, published principles on artificial intelligence that were closely based on IEEE’s. The German federal commission on data ethics, among other public-sector organizations in many countries, also benefited from IEEE’s expertise, according to Mr Karachalios.

While changing policy is a laudable goal, the social impact of automated systems depends on the concrete steps tech corporations take to give their users agency – or to take it away. AlgorithmWatch will keep asking the wrong questions.

The letters sent by AlgorithmWatch to Facebook, Google and Twitter are available as PDFs below.

Additional research: Veronika Thiel
