Submission to the Report of the United Nations Special Rapporteur on extreme poverty and human rights

The United Nations Special Rapporteur on extreme poverty and human rights, Philip Alston, is preparing a thematic report on digital technology in national social protection systems and its human rights impact, especially on those living in poverty. The report will be presented to the UN General Assembly in New York in October 2019.

AlgorithmWatch supports his efforts to shed light on the positive and negative human rights impacts resulting from the introduction of ADM systems and digital technologies in social security systems. We particularly welcome and look forward to the report's global and comparative perspective, which takes into account how domestic political, economic, legal and cultural contexts result in different outcomes.

Answering the call for submissions, we summarized our findings on issues such as the lack of transparency in the configuration and deployment of ADM in public welfare systems, the collective and societal impacts of its use, the political context and wider circumstances in which these systems are implemented, as well as the role of private businesses. Our previous publications, with concrete examples from different European countries and a more detailed case study on Germany, provide a basis for assessing the impact of these developments on economic, social and cultural as well as civil and political human rights, and highlight areas for further research and the need for action.

Why automated decision-making instead of Artificial Intelligence?

Algorithmically controlled, automated decision-making or decision support systems are procedures in which decisions are – partially or completely – delegated to another person or corporate entity, which in turn uses automatically executed decision-making models to perform an action. By saying systems instead of technologies, we point to the fact that an ADM system, in an increasingly common use of the term, is a socio-technological framework that encompasses a decision-making model, an algorithm that translates this model into computable code, the data this code uses as input – either to ‘learn’ from it or to analyse it by applying the model – the interpretation of the output, and the entire political and economic environment surrounding its use.
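
To make this framing concrete, the following is a minimal, purely illustrative sketch in Python of the components this definition distinguishes – a decision-making model, its translation into code, input data, and the interpretation of the output. All names, weights and thresholds are hypothetical and not drawn from any system discussed in this submission.

```python
# Illustrative sketch of the parts of an ADM system named above. All
# names, weights and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Applicant:
    months_unemployed: int
    dependents: int

def risk_score(a: Applicant) -> float:
    # The 'decision-making model', translated into computable code.
    return 0.1 * a.months_unemployed + 0.3 * a.dependents

def recommend(a: Applicant) -> str:
    # Interpretation of the output: the score is mapped to a category
    # that a human case worker may then accept or override.
    return "priority support" if risk_score(a) > 1.5 else "standard support"

print(recommend(Applicant(months_unemployed=10, dependents=2)))  # priority support
```

The surrounding political and economic environment – who chose the weights, who is affected, who can contest the result – is precisely what the code alone does not show, which is why we speak of systems rather than technologies.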

Question 1 | specific case studies involving the introduction of digital technologies in national social protection systems

General findings from previous research

AlgorithmWatch’s Atlas of Automation provides an overview of automated decision-making (ADM) systems used in Germany and addresses the question of how these systems affect access to public goods and services as well as the exercise of civil liberties, especially for people who can be considered disadvantaged or marginalized. The Atlas refers not only to the potential for discrimination that results from the automation of processes and decisions, as well as from pre-defined, human-made databases and models, but also to opportunities and advantages that the use of automated decisions makes possible or conceivable. In our report Automating Society in the EU, we further identified more than 60 examples of ADM systems already in use, most of them deployed in the public sector of the respective countries.

The examples in our reports include systems identifying children vulnerable to neglect in Denmark (on hold due to public pressure), detecting learning problems in primary and secondary schools to help teachers find "problematic" pupils in Slovenia, and initiatives at municipal level to detect child abuse and/or domestic violence in the Netherlands. Systems already decide which patients get treatment in the public health system in Italy, assign, process and control social benefits in Sweden, Finland and Denmark, and calculate personalized budgets for social care in the United Kingdom, among many others. Credit scoring systems and predictive policing mechanisms can be found in many EU countries – the range of applications of ADM systems has broadened to almost all aspects of daily life.

The number of scoring and risk assessment systems in public administration and government practice is increasing – although it differs significantly between regions and administrative levels – and these systems are used for purposes like the identification and categorization of citizens, the allocation of services, and the prediction of behaviour.

Based on those research findings, we see a strong need to debate the deployment of ADM systems in the public sector and social security institutions as well as their impact on the state-citizen relationship.

The state's monopoly on the use of force and the fact that those in need cannot avoid public social security services make clear that the involvement of public entities in data collection and processing is a decisive aspect. At the same time, the implementation of ADM systems is embedded in developments like datafication and systems-interoperability efforts – from EU level down to local councils – by administrations that are struggling with austerity measures and aim to tackle these challenges with integrated data warehouses. Research and discussion are furthermore needed on the often-neglected dimensions of potential collective and societal harms.

Taking into account the interrelation and (often not explicitly regulated) shared ownership of public and private actors in this field, an even closer look is warranted: the potential of ADM systems to contribute to better needs assessment and problem solving, efficient use of resources, and more transparent and accountable decisions depends on the transparency and accountability of the use of these systems. (See also EPRS 2019)

Question 3 | What human rights concerns might arise in connection with the introduction of digital technologies in social protection systems?

Human rights perspective on digital technologies in social protection systems

Uncertainty about use

ADM systems are used to make life-changing decisions. Secrecy about the current status of their deployment in the public sector – that is, a lack of systematic information on where and how these systems are used – is nevertheless the rule in Germany and other European countries.

As evidenced by the diverse responses to parliamentary inquiries, freedom of information requests by civil society actors, and interviews we conducted during summer and autumn 2018, there seem to be no official guidelines in place for disclosing details of the uses of ADM systems to the public. (See also EPRS 2019)

Uncertainties about collective impacts

We found – and this is also emphasized by organizations with a more focused human rights perspective on the topic – that these systems have strong collective and societal effects. Compared to individual harms, collective and societal harms are in many cases not sufficiently addressed by a human-rights-based approach. This adds to the factors that make human rights violations difficult to expose and challenge, as they remain unnoticed. It is therefore crucial to find the right frame to address these collective impacts and dimensions.

The right to social security and an adequate standard of living

In addition to potential infringements of privacy and data protection rights, ADM systems deployed in the public sector affect the economic and social rights not only of individuals but also of groups in society as “collective subjects” of these systems. The potential harms caused by automated decision-making range from loss of opportunity (ADM in insurance and social benefits, employment) and economic exclusion (credit scoring, HR analytics) to social detriment (reinforcement of biases and structural discrimination in ADM) and loss of liberty (predictive policing, ADM in judicial systems). (See EPRS 2019)

Impact on vulnerable groups – Is the impact on people living in poverty different from people not living in poverty?

Because poverty manifests itself in multiple vulnerabilities, people living in poverty more often find themselves dependent on the various support systems of social welfare. Vulnerable and marginalized communities are less able to escape the accumulation of data about them when these services are provided by the public sector. The increasing interoperability of support systems and social benefit mechanisms – and the underlying data collection – affects them more than those who use public services only for administrative matters, like renewing an ID card.

Various systems we describe in our report Automating Society in Europe are directed towards the well-being of vulnerable groups. Only in some cases did we find indicators of the purpose of their deployment, and often the implementation of ADM systems in the public sector could be connected to a policy agenda and embedded in a political context. A decisive aspect to take into account is therefore the motivation and the objectives behind the introduction and application of such systems.

A further concern is the risk of stigmatizing and stereotyping particular groups through ‘risk’ labelling and through targeting based on faulty calculations. Other concerns are the lack of transparency and of consent.

Business and human rights – Involvement of corporations

Our findings on the German market for public administration software – in this case software covering unemployment services, social and youth welfare – support the observation that some companies increasingly dominate the respective national markets. [1] (See also EPRS 2019) In the field of predictive policing, a worrying example from Hesse showed that in some cases private companies not only develop and provide the software, but are also responsible for operating it (see the case study on Germany below).

Question 4 | What contextual circumstances affect the impact of digital technologies in specific social protection systems on human rights?

Do these schemes contribute to the surveillance, control and exclusion of the poor?

It can be observed that ADM systems often focus on risk assessment and risk prediction. Some scholars identify the predominant security discourse of the past two decades as the source of the ‘risk management’ focus of many ADM systems (Coaffee & Murakami Wood, 2006; Aradau & Blanke, 2015, quoted in Dencik et al. 2018). The political context in which the applications are embedded is said to influence the design and use of the systems:

ADM as part of reforms & policy agendas

In Germany, newly introduced “predictive policing” systems based on new police laws are accompanied by a public debate about security and terrorism, although crime statistics do not show any increase in the respective regions and states (LINK).

In 2014, the Polish government introduced a complex reform of the public employment service Publiczne Służby Zatrudnienia (PSZ), which runs job centres in almost 350 towns and cities. In this setting, a controversial scoring system profiling the unemployed was introduced, deciding what support the unemployed can apply for and what training they can receive. The reactions showed how a lack of transparency about the algorithmic basis and of safeguards against errors of the system creates tension with the human and social rights of the unemployed. Although the system was initially intended as an advisory tool, early statistics indicate that clerks decided to override its result in only 1 in 100 cases. They stated time constraints as one reason, but also feared repercussions from supervisors if a decision was later called into question – and 80 % demanded changes to the system.
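
To illustrate the kind of mechanism described above – not the actual Polish system, whose model has not been disclosed – such a profiling tool can be sketched as follows. The categories, features and weights are hypothetical assumptions.

```python
# Hypothetical sketch of a profiling tool that assigns unemployed people
# to support categories and lets a clerk override the suggestion. It does
# NOT reproduce the undisclosed PSZ model; all rules are invented.
from typing import Optional

def profile_category(age: int, months_unemployed: int, qualifications: int) -> str:
    # Invented scoring rule mapping features to one of three categories.
    score = 0.5 * months_unemployed - 0.2 * qualifications + 0.1 * age
    if score < 5:
        return "I: close to the labour market"
    if score < 12:
        return "II: needs support"
    return "III: far from the labour market"

def final_decision(suggested: str, clerk_override: Optional[str] = None) -> str:
    # Nominally advisory: a clerk may substitute their own judgement,
    # but reportedly did so in only about 1 in 100 cases.
    return clerk_override if clerk_override is not None else suggested

suggested = profile_category(age=45, months_unemployed=18, qualifications=10)
print(final_decision(suggested))  # II: needs support
```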

At the same time, the use of the very same system varies with factors like organizational culture, individual preferences and local constraints – in addition to the design of the tool or technology itself. The responsible institutions' claim and stated purpose of creating more centralized, “more objective categorizing machines” is not met in practice.

ADM and the GDPR – Regulation beyond rights to privacy and data protection?

Critics and rights advocates question the scope and effectiveness of applying the GDPR to ADM systems and see little room for manoeuvre when it comes to explicit, well-defined and effectual rights, particularly against group-related and societal risks – which especially affect economic and social rights – and against the impact of automated decision-making systems. The regulation further does not reflect the diversity of ADM systems already implemented, including the various scenarios in which people are involved who consciously or unconsciously implement ADM or follow its recommendations unquestioningly.

It remains controversial among experts what the GDPR defines as a “decision”, and what circumstances and which “legal effects” have to occur for the prohibition of automated decision-making (Art. 22) to apply.

Question 5 | Would you have specific recommendations about addressing both the human rights risks involved in the introduction of digital technologies in social protection systems as well as maximizing positive human rights outcomes?

Recommendations

Right to effective remedy and the question of responsibility

Demands on ADM systems must be distinguished from demands on bureaucracies themselves. Scholars in the field agree that technology can reinforce the injustices of a system already in place, and they often rightly emphasize that the problems and risks of discrimination existed before. Discussing the allocation of responsibility is crucial, but one must take into account structural and systemic implications that predate the application of ADM systems and have to be tackled with the same urgency.

What is needed is a review and adjustment of procedures for challenging bureaucratic decisions (automated or not): whether a case worker or administrator makes a decision based on instructions or on the output of a computer system should not matter as long as they have no possibility to challenge or adjust the decision. The question of responsibility has to be dealt with on a different level. However, citizens must have the right to access qualified contact persons who know how the systems work and have the resources to intervene. Citizens as well as clerks must not face any negative consequences for requesting or providing this information.

Strengthen administration and introduce a public register

Our research for the Atlas of Automation has made us acutely aware of a universe of different software systems in all kinds of branches of administration and other service sectors that are relevant to participation. So far, no register of such systems exists that allows for an evaluation of the degree of automation and its effect on participation and on society. In order to ensure democratic debate and control, municipalities, federal states and the national government in Germany need to create such a register. It should publish the purpose for which an ADM system is deployed, who was involved in its design, development and training, which decision model underlies it, and how the quality and effectiveness of the system were verified.
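
As a minimal sketch of what a machine-readable entry in such a register could look like, the fields below follow the enumeration above; the field names and example values are our own assumptions, not an existing standard.

```python
# Hypothetical schema for one entry in a public register of ADM systems,
# mirroring the information the text argues should be published.
from dataclasses import dataclass
from typing import List

@dataclass
class RegisterEntry:
    system_name: str
    deploying_body: str
    purpose: str
    actors_involved: List[str]  # who designed, developed and trained it
    decision_model: str         # e.g. rule-based, statistical, machine learning
    quality_verification: str   # how quality and effectiveness were verified

example = RegisterEntry(
    system_name="Job-matching assistant (fictitious)",
    deploying_body="Municipal job centre (fictitious)",
    purpose="Rank incoming vacancies for job seekers",
    actors_involved=["in-house IT department", "external vendor"],
    decision_model="weighted keyword matching",
    quality_verification="annual audit against manual case review",
)
print(example)
```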

Such a survey of the current state of affairs in Germany would also strengthen the administration, because it would help it keep an overview of its own capacity to act. On the one hand, employees should be trained to see more clearly to what extent software (subtly) prepares decisions or already effectively takes them. Where applicable, existing software-based processes should be reviewed to detect bias and discrimination. On the other hand, staff should also be able to voice recommendations and to develop procedures for implementing ADM where it is appropriate. Furthermore, mechanisms for the evaluation of the respective software systems, as well as methods to conceptualise ADM, need to be established within the administration.

Findings from German labour law

The complexity of algorithms leads to problems of providing evidence when it comes to breaches of law and (human) rights violations: in Germany, if a discriminatory differentiation is made by an algorithm, this will be attributed to the employer. However, in order to be liable for a discriminatory decision, “knowledge” of the fact that it took place needs to be established. Whether this can be done in the case of an algorithmically driven system whose inner workings are not known to the company employing it is currently unclear. This could be counteracted by documentation obligations, similar, for example, to those known from financial market regulation.

Labour rights, which also apply when the collection and processing of data is outsourced to a third party, can be a strong lever to restrict incomplete, non-transparent and illegitimate decisions based on ADM systems. If, however, only anonymous data is collected or only the performance of an entire department or group is evaluated, works councils usually have no say. It is therefore essential to consciously tailor oversight mechanisms and governance structures to the collective effects of ADM systems.

What if the mere violation of data protection law is unlawful, but does not lead to verifiable damage? Who must be responsible for proving and providing the evidence? Often this responsibility still lies with those affected – and in many cases they are “rationally apathetic”, i.e. they do not pursue their claims because it is not worth the effort.

Regulation in other sectors already includes emergency precautions for unforeseen disruptions, or limits the application of fully automated decisions to certain fields. Limiting their use to procedures that have no irreversible consequences for humans could be one option, at least for systems with self-learning algorithms.

Case Study – Germany

ADM & unemployment support system

Background: Digitisation of the social welfare system in Germany

Since 2017, a new regulation foresees the digitisation of all administrative and social services in Germany by 2022. [LINK] The Federal Government defines welfare services under the Social Security Act II as administrative services. Although digitisation does not necessarily mean the implementation of automated systems, there is a trend towards increased testing of and experimentation with ADM systems. However, assessing the current status of the deployment of ADM systems in the public sector is hardly possible, and findings are difficult to evaluate. Answers to parliamentary inquiries on the topic gave some insights, but remain vague and seem to contradict stakeholders we interviewed when it comes to defining and identifying the level of automation of decision-making processes. [LINK] No register of ADM systems used in the public sector exists.

In Germany, there is a two-fold system of unemployment support agencies: around 300 so-called job centres are run under the umbrella of the federal employment agency in cooperation with municipalities, while around 100 are run independently by the municipalities themselves (“Optionskommunen”). The latter are free to decide on the deployment of technologies and ADM systems on their own, as long as they meet the legal and administrative requirements.

Freedom of Information Law in Germany

The Freedom of Information Law (Informationsfreiheitsgesetz, IFG) was introduced in Germany in 2006. However, state authorities often refuse or actively block the release of crucial and sensitive information. Applicants can file objections and legal actions against refusals and fee notices, but since 2009 the Federal Government has spent more than 1.8 million euros to ward off claims for information under the IFG or press laws. [LINK] (German only)

Research and Remarks

There is no public overview of which state or municipality is using which technologies and systems. The result of a parliamentary inquiry was an 80-page list of cryptically described IT systems deployed or tested by the federal employment agency alone.

Among them, for example, we found the following description of one system, which is the clearest information we obtained (our own translation):

In some of the “independent” municipalities, automated distribution of job offers via email already seems to be deployed as part of a passive ADM system for profiling and matching the unemployed. One provider of these systems confirmed that case workers using their system can choose to (de)activate the fully automated matching mechanism, which sends incoming job ads directly to previously identified potential candidates via email.
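
The mechanism the provider describes can be sketched roughly as follows; since the actual software has not been disclosed, the matching criterion and all names are hypothetical.

```python
# Hypothetical sketch of the described mechanism: incoming job ads are
# forwarded automatically to matching candidates unless the case worker
# has deactivated the feature. The matching logic is invented.
from typing import Dict, List, Set

def matches(job_keywords: Set[str], candidate_profile: Set[str]) -> bool:
    # Invented criterion: at least two overlapping keywords.
    return len(job_keywords & candidate_profile) >= 2

def dispatch(job_keywords: Set[str], candidates: List[Dict],
             auto_matching: bool) -> List[str]:
    if not auto_matching:
        return []  # the case worker has switched the automation off
    return [c["email"] for c in candidates
            if matches(job_keywords, c["profile"])]

candidates = [
    {"email": "a@example.org", "profile": {"warehouse", "forklift", "night-shift"}},
    {"email": "b@example.org", "profile": {"office", "accounting"}},
]
print(dispatch({"warehouse", "forklift"}, candidates, auto_matching=True))
# ['a@example.org']
```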

Right to access

There is a right to access one's personal file at the unemployment agency, but in an answer to a parliamentary inquiry, no figures were (or could be) given on the number of access requests filed and granted.

The initiative FragDenStaat uses tools like litigation, crowd-sourced FOI requests and campaigns like “Ask the Unemployment Agency” to increase transparency and push public debate. In 2019, it aims to include institutions at EU level. [LINK]

ADM & Credit scoring

Credit scoring affects society in diverse, more and less visible ways. At the same time, credit scoring companies do not have to comply with anti-discrimination law, at least in some European countries, if there is statistical evidence that people behave differently based on, e.g., age and gender. With the campaign OpenSCHUFA, AlgorithmWatch and the Open Knowledge Foundation Germany collected credit scores of around 4,000 citizens assigned by SCHUFA Holding AG, Germany’s dominant credit scoring company. These scores were then analysed by journalists from Der Spiegel and the public broadcaster Bayerischer Rundfunk. One of the results found in the dataset was that young males were frequently rated worse than older people with otherwise similar features. The amount of data collected did not allow us to substantiate a causal relationship. SCHUFA declined to release more data that would have allowed this assumption to be validated or invalidated.
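
An analysis of this kind can be sketched as follows; the data and the crude matching of ‘otherwise similar features’ are simplified assumptions, not the journalists' actual methodology.

```python
# Simplified sketch of comparing scores between demographic groups with
# otherwise similar features. The records are invented; this is not the
# actual OpenSCHUFA analysis pipeline.
from statistics import mean

records = [
    {"age": 24, "sex": "m", "late_payments": 0, "score": 92.1},
    {"age": 58, "sex": "m", "late_payments": 0, "score": 97.4},
    {"age": 26, "sex": "m", "late_payments": 0, "score": 93.0},
    {"age": 61, "sex": "f", "late_payments": 0, "score": 97.9},
]

def avg_score(rows, predicate):
    selected = [r["score"] for r in rows if predicate(r)]
    return mean(selected) if selected else None

# Compare young men with older people among otherwise similar records
# (here, crudely: no late payments).
young_men = avg_score(records, lambda r: r["sex"] == "m" and r["age"] < 30)
older = avg_score(records, lambda r: r["age"] >= 50)
print(f"young men: {young_men:.2f}, older people: {older:.2f}")
```

Establishing that such a gap is causal rather than an artefact of the sample would require far more data – which is precisely what SCHUFA declined to release.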

ADM & Asylum

The Federal Office for Migration and Refugees (Bundesamt für Migration und Flüchtlinge – BAMF) aims to tackle its procedural problems with a “Digitalisation Agenda 2020” [LINK].

In 2016, an “integrated identity management” system was introduced. Today, it contains several modules that case officers can use as supporting tools in their decisions. The system is mainly aimed at finding out whether the details given by those seeking protection are plausible. For example, software is used to try to recognize the language and dialect of origin of a person from audio recordings. Initially, the error rate of the so-called speech biometrics system was approximately 20 per cent; according to the BAMF, this figure has since been reduced to 15 per cent. By mid-November 2018, the procedure had been used about 6,000 times, meaning that it must have produced approximately 900 false results. [LINK]
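
For transparency, the estimate of roughly 900 false results follows directly from the two reported figures:

```python
# Estimate implied by the figures reported above.
uses = 6000        # applications of the speech biometrics procedure
error_rate = 0.15  # error rate stated by the BAMF
print(round(uses * error_rate))  # -> 900
```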

Another piece of software in use has its origins in military forensics, the secret services and the police [LINK]. It can analyse telephone data, past connection data and saved telephone numbers. The BAMF claims that refugees voluntarily give their permission to access their telephones. In 2018, the analysis of thousands of refugees’ telephones yielded usable results in less than 100 cases. [LINK] Furthermore, the BAMF uses software to compare photographic portraits and various possible transliterations of Arabic names into the Roman alphabet. [LINK] The BAMF states that the use of these automated procedures has been a success. However, critics think that the cost of the procedures and the number of errors are too high. They also complain about the lack of transparency in the way the software systems function, and about the lack of scientific monitoring to evaluate the effectiveness of the procedures. [LINK] [LINK]

ADM & Predictive Policing

At present, predictive policing systems are deployed in six federal states. Apart from systems developed by the law enforcement authorities themselves, systems developed by various private manufacturers are implemented. The most widely applied systems so far are based on geographical data and statistical analysis, trying to identify areas where burglaries and theft offences are more likely to occur. The prognoses are based on models such as the near-repeat theory, which argues that burglars tend to strike again near the location of a successful break-in. These systems are used for the allocation of resources and patrols. It is unclear, however, whether such place-oriented systems have an effect: an accompanying study by the Max Planck Institute for Foreign and International Criminal Law in Freiburg was unable to find any clear evidence of effective prevention or a decrease in crime during the test phase, which ran between 2015 and 2017 in Stuttgart and Karlsruhe [LINK]. Moreover, there is a strong need to further examine whether these so-called predictive policing systems might create reinforcing effects, leading to the stigmatization of specific neighbourhoods, parts of cities and other areas.[2] [LINK]
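
To illustrate the near-repeat logic described above – not any vendor's actual product – a place-oriented prediction can be sketched as follows; the grid, radius and time window are invented parameters.

```python
# Illustrative near-repeat sketch: flag locations close in space and time
# to a recent burglary. Radius and time window are invented parameters;
# this is not any deployed product's algorithm.
from math import hypot

# (x km, y km, day) of recorded break-ins on a simplified city grid
burglaries = [(1.0, 2.0, 10), (4.5, 3.0, 12)]

def elevated_risk(x: float, y: float, day: int,
                  radius_km: float = 0.5, window_days: int = 7) -> bool:
    # Near-repeat assumption: risk is elevated near a recent offence.
    return any(
        hypot(x - bx, y - by) <= radius_km and 0 <= day - bday <= window_days
        for bx, by, bday in burglaries
    )

print(elevated_risk(1.2, 2.1, 14))  # True: near the first break-in, 4 days later
print(elevated_risk(9.0, 9.0, 14))  # False: far from any recorded offence
```

Whether patrols dispatched on such flags actually prevent crime is exactly what the Freiburg study could not confirm, and feeding the resulting police activity back into the data risks the reinforcing effects mentioned above.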

The city of Mannheim in Baden-Wuerttemberg launched an “intelligent video surveillance” project, developed in cooperation with the Fraunhofer Institute for Optronics, Systems Engineering and Image Evaluation. The technology is not based on face recognition but on “automatic image processing” (a person-based system). [LINK] Installed in stages, by 2020 around “76 cameras will be used to monitor people in central squares and streets in the city centre and scan their behaviour for certain patterns” [LINK] that “indicate criminal offences such as hitting, running, kicking, falling, recognised by appropriate algorithms and immediately reported to the police”.

Critics warn that the application of such behavioural scanners, here in the form of video surveillance with motion pattern recognition, “exerts a strong conformity pressure and at the same time generates many false alarms”, as “it is also not transparent to which ‘unnatural movements’ the algorithms are trained to react. Thus, lawful behaviour, such as prolonged stays at one place, could be included in the algorithms as suspicious facts.” [LINK]

“Hessen-Data”, acquired in 2017 by the government of the federal state of Hesse, on the other hand works as a person-related system. The software is provided by Palantir, a private US-based software company. As far as is known, the system combines data from social media with entries in various police databases as well as connection data from telephone surveillance in order to identify potential offenders. Through “profiling”, it is intended to identify potential terrorists. Hesse's government is planning to extend its deployment by using it to detect child abuse and abduction. The necessary legal foundation for “Hessen-Data” was provided by Hesse's Law on Police, which was revised in 2018. An investigative committee, reporting to the state parliament, is currently trying to clarify issues around the acquisition of the system and looks into questions relating to data protection. Apparently, the system is supervised by Palantir staff, who as a result might have access to private data relating to individual citizens. [LINK]

References

See also direct links in the text

AlgorithmWatch (2019) Atlas of Automation: Automated decision-making and participation in Germany, 1st edition, Berlin.

AlgorithmWatch (2019) Automating Society: Taking Stock of Automated Decision-Making in the EU, Berlin.

Dencik, L., Hintz, A., Redden, J. and Warne, H. (2018) Data Scores as Governance: Investigating uses of citizen scoring in public services. Research Report. Cardiff University.

European Parliamentary Research Service (EPRS) - Scientific Foresight Unit (STOA) (2019) Understanding algorithmic decision-making: Opportunities and challenges.

[1] According to an interview partner, the dominant private providers of municipal administration and welfare software (used alongside the governmental system) are PROSOZ Herten GmbH, prosozial GmbH, Lämmerzahl GmbH, AKDN-sozial and AKDB – compared to 12-14 companies before the Hartz IV reforms in 2004.

[2] Further research questions around predictive policing systems include (a) the use of supervised learning vs. reinforcement learning, (b) the lack of use of existing ‘systemic missing data models’, the failure to factor in external costs and impacts, and the lack of values and goals defined prior to and throughout system development, (c) models being rendered inaccurate by the behavioural change they cause, and (d) concerns about representation in the data (e.g. social media data, police data) as well as unbiased data vs. selection bias and structural discrimination, among many others (see also the upcoming 2019 report by the Amnesty International Police and Human Rights Programme).
