Ethics and algorithmic processes for decision making and decision support
Current ethical debates about the consequences of automation focus on the rights of individuals. But algorithmic processes also have a collective dimension. In our working paper, we therefore suggest ways to take this dimension into account more fully.
Far from being a thing of the future, automated decision-making informed by algorithms (ADM) is already a widespread phenomenon in contemporary society. It is used in contexts as varied as advanced driver assistance systems, which brake a car automatically when danger is detected, and software packages that decide whether or not a person is eligible for a bank loan. Government action, too, is increasingly supported by ADM systems, whether in “predictive policing” or in deciding whether a person should be released from prison. What is more, ADM is still in its infancy: within a few years, every single person will be affected daily in one way or another by decisions reached using algorithmic processes. Automation is set to play a part in every area of politics and law.
Current ethical debates about the consequences of automation generally focus on the rights of individuals. However, algorithmic processes – the major component of automated systems – have, first and foremost, a collective dimension, which can be addressed only partially at the level of individual rights. For this reason, existing ethical and legal criteria are not suitable (or are, at least, inadequate) for considering algorithms in general. They lead to a conceptual blurring with regard to issues such as privacy and discrimination, for instance when information is declared private merely because it could be misused to discriminate illegitimately. Our aim in the present article is, first, to bring a measure of clarity to the debate so that such blurring can be avoided in the future. Beyond this, we discuss ethical criteria for technology that take the form of universal abstract principles intended to apply to all societal contexts.
However, since ethics is always also about specific kinds of action and about responsibility for that action, it inevitably depends on structural and situational context. The rules that apply to the state or to a government, for example, can hardly be applied to citizens as individuals. While this differentiation is standard practice in the realms of ethics and constitutionalism, it has so far been lacking in the debate about automation.
The present paper seeks to contribute to a differentiated ethical and legal debate by introducing a taxonomy focused on the issue of action as well as on the dimensions of potential harm inherent in automation. We begin by explaining why technology-neutral ethics are needed. We then look at how actions and decisions come about within automated systems before proposing a taxonomy that provides a structure for classifying the various risks and conflicts more appropriately, thus enabling a more differentiated procedure for developing ethical criteria.
We therefore propose a categorization that distinguishes between algorithmic processes oriented towards collective publicness, which we call the category of social goods, and algorithmic processes dedicated to the individual, which we call the category of individual goods. The social goods category is further divided into the subcategories societal frame and collective goods.
This taxonomy treats the public/publicness in its entirety as a complex structure: one that cannot be reduced to opinions and information, but that encompasses collective goods on the one hand and individual and collective interactions on the other. It thereby provides a better ethical contextualization for working out differentiated ethical criteria grounded in a technology-neutral, values-oriented approach.
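To make the shape of this taxonomy concrete, here is a minimal Python sketch of its category structure. It is purely illustrative: the working paper defines these categories conceptually rather than computationally, and the mapping of the example ADM systems onto categories and subcategories is our own hypothetical assumption, not taken from the paper.

```python
from enum import Enum

class Category(Enum):
    """Top-level categories of the proposed taxonomy."""
    SOCIAL_GOODS = "social goods"          # processes oriented towards collective publicness
    INDIVIDUAL_GOODS = "individual goods"  # processes dedicated to the individual

class SocialGoodsSubcategory(Enum):
    """Subcategories of the social goods category."""
    SOCIETAL_FRAME = "societal frame"
    COLLECTIVE_GOODS = "collective goods"

# Hypothetical example systems mapped onto the taxonomy.
# These assignments are illustrative assumptions, not classifications from the paper.
examples = {
    "predictive policing": (Category.SOCIAL_GOODS, SocialGoodsSubcategory.SOCIETAL_FRAME),
    "bank loan eligibility scoring": (Category.INDIVIDUAL_GOODS, None),
}

for system, (category, subcategory) in examples.items():
    label = category.value if subcategory is None else f"{category.value} / {subcategory.value}"
    print(f"{system}: {label}")
```

Run as a script, this simply prints each example system with its (assumed) category label; the point is only to show that the two top-level categories are disjoint and that the subcategories belong to social goods alone.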
Download the complete Working Paper as a PDF.
Update, June 7th: In last week’s version of the Working Paper, we forgot to mention that Kathleen A. Cross translated the paper; the bibliography was also missing. Please excuse these mistakes; the paper has been updated.
The development of this working paper was made possible by a fellowship of the Bucerius Lab of ZEIT-Stiftung Ebelin und Gerd Bucerius.