No red lines: Industry defuses ethics guidelines for artificial intelligence

Artificial intelligence should serve people and respect fundamental rights. This is what an expert group of the EU Commission demands in its newly published guidelines for these technologies. Non-negotiable ethical principles, however, are no longer to be found in the document: industry representatives on the committee successfully had them deleted.

German version by Chris Köver and Alexander Fanta,
first published by Netzpolitik.org, translation by Kristina Penner

[Please also see our AI Ethics Guidelines Global Inventory, a list of Ethics Guidelines for ADM systems worldwide]

The EU Commission’s new guidelines for the ethical use of Artificial Intelligence are meant to be a great success. For more than nine months, 52 experts worked on the document, which is to become a kind of ethics handbook for people in politics, business and software departments. What principles must they observe to create "trustworthy AI", as the guidelines call it – AI that serves people? These are the questions the 40-page document is supposed to answer.

Reading it conveys a pleasant feeling. There is a lot of talk of human dignity, of technology that puts people first and serves the common good. In contrast to the mix the German government presented a few months ago as an AI strategy, the guidelines read as if humans had actually written them. They are refreshingly concrete and packed with examples of how AI can be a boon or a disaster for our society. They address predictive policing and facial recognition, algorithms that disadvantage women or minorities, and the fact that ethics is not something that can simply be ticked off a list.

Red lines? Not in the EU

"It's the best and most substantial document we have on the planet so far on this subject," says ethics researcher Thomas Metzinger of the University of Mainz, who has contributed to the recommendations. Nevertheless, he is disappointed by the result. As is so often the case in political coordination processes, the devil is in the details. The empty spaces and small linguistic barbs, which can hardly be seen from the outside with the naked eye, are only visible to those who have witnessed how they got there.

What Metzinger and other members of the expert group told netzpolitik.org in interviews is a lesson about political processes in Brussels and the lobbying work of industry. Take the red lines: he and his colleague Urs Bergmann, who develops algorithms for Zalando, were initially tasked with defining such red lines, says Metzinger – values that are not negotiable, things that should under no circumstances be done with Artificial Intelligence in Europe.

They organised several workshops and defined a number of such ethical no-go areas: research on autonomous weapon systems, for example; citizen scoring as practised in China; automated identification of individuals by means of facial recognition; or AI systems that operate covertly.

Only to watch the numerous industry representatives on the committee push for the term to be erased from the entire document. The word "non-negotiable" and the expression "red lines" no longer appear in the final version. Instead, the text speaks of "concerns" and "tensions" that could arise around the document's guiding ethical principles, and of the need to weigh up and document any trade-offs that are made. "There are no more non-negotiable ethical principles in this document," criticizes Metzinger. Yet not everything can be negotiated, especially when it comes to fundamental rights.

Ethics made by Nokia, SAP & IBM

Of the 52 experts who drafted the document, 23 come from companies such as Nokia, Google, Airbus or IBM. Counting the lobbying associations, 26 representatives – half of the group – come from industry. Facing them are only four ethics experts and ten representatives of organisations working on consumer protection and civil rights.

For Ursula Pachl of the European consumer organisation BEUC, this composition constitutes a problem. "From our point of view, the whole composition is too industry-dominated and does not adequately reflect the mandate." SAP, for example, is basically represented three times: by the AI expert Markus Noga, by the lobby association Digitaleurope – and once more by the chairman of the expert group Pekka Ala-Pietilä, who sits on the supervisory board of the software company. No academic expert on data protection, on the other hand, is a member of the group. "That's remarkable," says Pachl, "because AI is very closely linked to data usage and data protection."

In the discussions, industry participants were keen to downplay the risks of AI, Pachl reports. First and foremost: Cecilia Bonefeld-Dahl, head of Digitaleurope, probably the most important lobby association of the global IT industry in Brussels – an alliance of 65 large companies, among them Google, Facebook, Apple, Amazon and Microsoft, as well as national trade associations such as the German BITKOM, which pool their interests there. In the expert group, Bonefeld-Dahl made sure that AI came off particularly well, says Pachl – and that the term "red lines" was removed from the document. When asked by netzpolitik.org, the head of Digitaleurope declined to comment.

Long-term risks: Not a good look

A section on the potential long-term risks of AI development has also disappeared completely. In it, Metzinger and others had pointed out the danger of systems eventually becoming so intelligent that they turn independent, develop a consciousness or even a morality of their own. Such scenarios, familiar from science fiction, are very unlikely, Metzinger concedes. Still, he argues, such risks should not be ignored.

For other members of the group, this went too far. The ethicist Virginia Dignum and the philosopher Luciano Floridi – who until recently was set to advise Google on its ethics board – had threatened to withdraw if these concerns remained in the document. All that is now left of the potential long-term risks is a hint hidden in a footnote: "While some of us think that strong AI, artificial consciousness, artificial moral agents, super intelligence (...) could also be examples of such long-term risks, many others consider this unrealistic."

All clean: Ethics as a performance

Metzinger calls this practice "ethics washing": companies engage in ethical debate with the goal of postponing or entirely preventing legal regulation. The longer the debate continues, the longer it takes before actual laws can interfere with their business model. The new Institute for Ethics in AI in Munich, financed by Facebook, falls into this category, as do the guidelines that companies from Google to SAP currently impose on themselves for dealing with AI.

But it is not only companies, says Metzinger: politics, too, sometimes cultivates this ethics theatre as a substitute for regulation. One inevitably thinks of the Data Ethics Commission, set up by the Ministry of Justice and the Ministry of the Interior last year, where 16 experts are now discussing AI, among other things. Or of the Digital Council, which also advises the government on the use of Artificial Intelligence. Or of the Competition Commission 4.0, which is supposed to provide the Ministry of Economic Affairs with recommendations on regulating Facebook and other data corporations.

Remarkably concrete

The EU Commission's guidelines repeat much of what existing ethics checklists already recommend for "accountable algorithms" or "ethical design". Trustworthy AI applications should be developed and used in such a way that they respect human autonomy, prevent harm, and work fairly and comprehensibly. They should also be deployed responsibly, rationally weighing the advantages and disadvantages for individuals, society and the environment.

Where this document actually stands out is in the details. While previous voluntary codes of conduct merely mention buzzwords such as transparency, equality, data protection and responsibility, the authors become remarkably concrete. They provide answers to the question: what does that mean in practice? For instance, even before a technology is developed, its impact on fundamental rights such as freedom of expression must be assessed. Or: systems should enable people to make informed decisions instead of manipulating them.

Chapter by chapter, the document becomes more tangible, until at the end it makes clear recommendations on how to achieve these goals: with a concrete blueprint for algorithms, for example, but also with entirely non-technical means such as regulation and development teams that are as diverse as possible.

One sticking point for many experts is the explainability of algorithmic decision-making processes: the algorithms that spit out a decision – such as whether someone receives a certain social benefit or a loan, or gets a certain YouTube video recommended to them – often work in opaque ways. This makes it impossible for those affected to comprehend a decision and, in the case of errors, difficult to appeal it.

This can be changed if algorithms are designed differently, albeit at the cost of accuracy. The guidelines highlight this trade-off and demand: "If an AI system has a decisive impact on people's lives, it should be possible to demand an explanation of the decision-making process of such an AI." How this is to be achieved while research is still searching for an acceptable solution remains unclear. As Sabine Theresia Köszegi, professor of labour science and organisation at TU Vienna, says, the work on the report has above all made clear that the explainability of AI systems needs further research.
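To get a sense of the trade-off described above, consider a minimal sketch in Python – our illustration, not part of the guidelines – that contrasts an opaque model with an interpretable one on a standard scikit-learn dataset. The dataset and model choices are assumptions made purely for demonstration:

# Illustrative sketch only: an opaque model vs. an interpretable one.
# Dataset and models are chosen for demonstration, not taken from the guidelines.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box": hundreds of trees, no single human-readable decision path.
black_box = RandomForestClassifier(n_estimators=300, random_state=0)
black_box.fit(X_train, y_train)

# Interpretable alternative: a linear model whose coefficients state how
# each input feature pushes the decision one way or the other.
interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
interpretable.fit(X_train, y_train)

print(f"black box accuracy:     {black_box.score(X_test, y_test):.3f}")
print(f"interpretable accuracy: {interpretable.score(X_test, y_test):.3f}")

# The kind of "explanation" a linear model can offer: the weight each
# feature carries in the decision, here the five largest by magnitude.
coefs = interpretable.named_steps["logisticregression"].coef_[0]
for name, weight in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name:>25s}: {weight:+.2f}")

In a toy setting like this the accuracy gap is usually small; in domains such as image or speech recognition it can be substantial – which is precisely the tension the guidelines leave unresolved.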

Everything’s optional, nothing's a must

So there is no lack of principles and recommendations. The only snag: the EU Commission's ethics guidelines are not binding either. They merely serve as orientation for companies that wish to commit themselves – a friendly offer, so to speak. Anyone who violates them need not fear any legal consequences. Is that enough? The ethicist Metzinger says no. "We need rules that are binding, enforceable and fully democratically legitimised," he says. "It's not just about ethics."

BEUC’s Ursula Pachl also criticizes the result: "That's far too little." The crucial debate, she says, must revolve around legislation and the question of whether the status quo is sufficient to protect consumers in an AI-driven market. It is about product safety and data protection, but also about liability rules: if several companies are involved, who is responsible when something goes wrong? How can users defend themselves against automated decisions? Do they have a right to know whether they are talking to a machine or a person?

These questions are also on the expert group's agenda, because the guidelines are only half of its mandate. The next step is to make concrete recommendations to the EU Commission on how the use of AI should be regulated and how research funds should be distributed. The EU has announced investments of 20 billion euros, so a lot of money is at stake. And as in all previous phases, the expert group will be under great time pressure: the Commission wants the proposals ready by June at the latest.

Ursula Pachl is jointly responsible, together with tech lobbyist Bonefeld-Dahl, for drafting these proposals. Her prognosis: it will be difficult. The preliminary talks already grew heated, and Digitaleurope sees little need for stricter laws. At the moment, however, the work is on hold anyway: fearing the ethics guidelines would not be ready in time, the group interrupted its discussion of legislation last November. This suits Digitaleurope and its member companies well – they now have even more time before the EU imposes binding rules on them, possibly including penalties for non-compliance.

First draft for worldwide standards

The guidelines are already far better than anything China or the US has produced so far, says ethicist Thomas Metzinger. But they can only be a first draft for a global agreement on ethical standards – one that would also have to include China and the US. Otherwise, research and development in Artificial Intelligence will simply migrate to wherever it has free rein. Above all, Metzinger demands that the ethics debate be moved out of the committees and events of Google and Facebook and back to where it can be conducted independently: the universities.


This article is licensed under Creative Commons BY-NC-SA 4.0.

