In Germany, a number of commissions, expert groups, platforms and organisations have been set up by government and parliament to assess the consequences of a wider application of Artificial Intelligence. While it is to be welcomed that different stakeholders engage in the debate, this profusion introduces the risk that expert groups end up working in parallel on overlapping issues. In addition, some of these groups have very short timelines to produce outcomes on a wide range of questions. Moreover, some of these outcomes are supposed to feed into government policy, but the process for accomplishing this is at times unclear, especially because there is no specific framework for cooperation, neither to develop joint strategies nor to prepare their implementation.
Other stakeholder groups are actively working in the field of automated decision-making (ADM). Business associations, academic societies, and a number of civil society organisations have already conducted research and published analyses, discussion papers and position papers on ADM. There seems to be a consensus that the developments offer many opportunities to benefit individuals and society, but that considerable work is needed to keep the risks in check.
Political debates on aspects of automation – government and parliament
German AI Strategy
The national AI strategy [DE 1], which was developed under the joint leadership of the Federal Ministries of Education and Research, Economics and Energy, and Labour and Social Affairs, was presented at the “Digital Summit” in early December 2018 in Nuremberg.
As part of the federal government’s implementation strategy for digitisation (“Hightech-Strategie 2025”), the AI strategy is to be further developed and adapted at the beginning of 2020 “according to the state of discussion and requirements”.
The foreword summarises the strategy’s goals as follows:
“The strategy is based on the claim to embed a technology as profound as Artificial Intelligence, which may also be used in sensitive areas of life, ethically, legally, culturally and institutionally in such a way that fundamental social values and individual rights are preserved and the technology serves society and mankind.”
At the heart of the document is the plan to make “AI made in Germany” a “globally recognised seal of quality”. The strategy focuses on three goals:
- To make Germany and Europe a leading location for the development and application of AI technologies and thus contribute to securing Germany’s future competitiveness
- To ensure the responsible development and use of AI for the common good
- To embed AI ethically, legally, culturally and institutionally into society, based on a broad dialogue within society and an active role of politics in shaping the issue
In order to develop Artificial Intelligence in a way that is responsible and focused on the public good, the strategy announces a range of measures. The federal government
- plans to create an observatory and vows to campaign for similar observatories on the EU and international level
- wants to organise a European and transatlantic dialogue focusing on a human-centric development of AI in the workplace
- vows to safeguard labour rights when AI is deployed and used in the workplace
- announces it will actively shape the ethical, legislative, cultural and institutional embedding of AI, facilitating a broad dialogue within society
- wants to develop guidelines for the design and use of AI systems in compliance with data protection law in round-table consultations with data protection supervisory authorities and business associations
- says it will promote the development of innovative applications that support autonomy, social and cultural participation, and the protection of citizens’ privacy
- announces the creation of a funding scheme to educate citizens and to support the public-minded design of technology
- says it will develop the platform “Lernende Systeme” into an AI platform to organise a dialogue between politics, academia, business and civil society
- pledges to find a European approach to data-based business models that reflects Europe’s common values and its economic and social structure
- aims to reflect whether the regulatory framework needs to be further developed to secure a high level of legal certainty, and asserts it will promote and require the respect of ethical and legal principles throughout the process of AI development and application
The government “sees itself under the obligation” to support the development of “AI made in Germany” as “a technology for the good and benefit of state and society”. By promoting diverse AI applications with the aim of achieving “tangible social progress in the interest of the citizens”, it aims to focus on the benefits for humans and the environment and to continue an intensive exchange with all stakeholders. It also frames AI as “a collective national task” which requires collective action and cross-sectoral measures. To achieve these goals, the strategy also pleads for stronger cooperation within Europe, especially when it comes to harmonised and high ethical standards for the use of AI technologies.
Implications of particular relevance to the participatory and fairness aspects of automated decision-making—which also have strong effects on the regulatory and policy framework for the application of ADM—can be found in the following paragraphs:
Section 3.1, focusing on Strengthening Research in Germany and Europe, and Section 3.8, Making data available and facilitating (re-)use, introduce a “systematic approach to technology” that promotes research on methods for “monitoring and traceability of algorithmic prediction and decision-making systems”, as well as research on pseudonymisation and anonymisation methods, and on the compilation of synthetic training data. It states that decisions “must be comprehensible so that AI systems can be accepted as ‘trusted AI’ and meet legal requirements.”
In the section about the labour market (3.5), questions about the increasing deployment of AI—and ADM—in the processes of hiring, transfers and dismissals, as well as the monitoring of workers’ performance or behaviour, are addressed. The authors emphasise that workers’ councils theoretically already have the means, within the framework of their labour representation rights, to influence the application of AI in these fields. They recognise, however, that people need to be empowered to exercise these rights, especially in the light of knowledge asymmetries. Here, the government asserts that it will examine whether an independent employee data protection law can create more legal certainty for the introduction of respective applications in the workplace.
With regard to the objective to Use AI for public tasks and adapt expertise of the administration (3.7), the government is hesitant to take a clear stand on the use of AI-based systems. It describes AI as “an instrument to contribute information to decision-making that cannot be obtained in an adequate timeframe without AI”, but clarifies that “police, intelligence and military assessments and decisions based on them will continue to be in the hands of the employees of the authorities.” The strategy mentions that “other areas of application are predictive policing (preventive averting of danger), the protection of children and adolescents against sexualised violence on the Internet, and the fight against and prosecution of the dissemination of abuse representations or social media forensics for the creation of personal profiles”, but the strategy does not indicate if and to what extent these could be introduced.
Because AI applications will “increasingly contribute to decision-making in everyday life or control it in the background”, the government plans to Review the Regulatory Framework (3.9) for gaps in relation to algorithmic and AI-based decision-making systems, services and products. If necessary, it vows to adapt regulation to make systems accountable for possible inadmissible discrimination and disadvantage. The strategy describes how “transparency, traceability and verifiability of AI systems” can be established in order to “provide effective protection against bias, discrimination, manipulation or other abusive uses, in particular when using algorithm-based prediction and decision systems.”
The government proposes “to examine the establishment or expansion of government agencies and private auditing institutions to monitor algorithmic decisions with the aim of preventing abusive use and discrimination and averting negative social consequences”. This is to include the development of auditing standards and impact assessment standards. “This control system should be able to request full disclosure of all elements of the AI/algorithmic decision-making process without the need to disclose trade secrets.”
The strategy further develops ideas to review existing regulation and standards and enhance “standardisation”, especially in regard to classifications and terminologies of AI and ethical standards in line with the “ethics by design” approach (3.10).
Digital Cabinet and Digital Council
The newly created Cabinet Committee on Digitisation discusses questions of AI and examines answers and solutions feeding into the (implementation of the) national strategy. [DE 2]
The Digital Council, located in the Chancellery, acts as a consultative body, intended to facilitate a close exchange between politics and national and international experts. Appointed by the government, its members come from a variety of sectors: scientists and practitioners, founders of start-ups and established entrepreneurs. Civil society is not represented. The council members are tasked with finding ways to implement the projects to be defined in the AI strategy. [DE 3]
Platform for Artificial Intelligence
The Plattform Lernende Systeme (Platform for Artificial Intelligence) was launched by the Federal Ministry of Education and Research in 2017 at the suggestion of acatech (German Academy for Science and Engineering), and will end by 2022. The platform aims to design self-learning systems for the benefit of society. [DE 4] In different working groups, experts from science, industry, politics and civil society analyse the opportunities and challenges of AI, and develop scenarios and roadmaps to inform decision-makers. [DE 5] The working group on “IT Security, Privacy, Legal and Ethical Framework” focuses on how AI can be implemented in a way that respects human autonomy, fundamental rights, equal opportunities, and does not discriminate. In general, the group ascertains “how society can actively participate during the implementation of self-learning, and therefore ADM systems”. [DE 6] In the context of the German AI strategy, the platform presented its new interactive information website on actors, strategies and technologies related to AI at the national, EU and international level. [DE 7]
Enquete Commission on Artificial Intelligence
In June 2018, the German Bundestag set up the “Study Commission Artificial Intelligence – Social Responsibility and Economic, Social and Ecological Potential”. [DE 8] [DE 9] This so-called Enquete Commission comprises 19 MPs from all parliamentary parties and 19 external experts appointed by parliamentary groups. [DE 10] The mandate of the commission is to examine the opportunities as well as the challenges of AI and its effects on individuals, society, economy, labour and the public sector. The commission’s task is to develop recommendations for policy makers based on its findings in order to “use the opportunities offered by AI and to minimise the risks.” [DE 8] The sectors of public administration, mobility, health, care, autonomy, ageing, education, defence, environment, climate and consumer protection will be at the centre of its inquiries. Furthermore, the commission will analyse the impact of AI on democratic processes, gender equality, privacy protection, economy, and labour rights. One of the crucial issues of the commission’s work will be the assessment of regulation for scenarios in which automated decisions about individuals and society are prepared, recommended or made. The final report is due in 2020. [DE 11]
Political debates on aspects of automation – Civil Society and Academia
AlgorithmWatch
The NGO AlgorithmWatch [DE 12] argues that automated decision-making systems are never neutral but shaped by economic incentives, values of the developers, legal frameworks, political debates and power structures beyond the mere software developing process. It campaigns for more intelligibility of ADM systems and more democratic control. The organisation’s mission is to analyse and watch ADM processes, explain how they work to a broad audience, and engage with decision-makers to develop strategies for the beneficial implementation of ADM. At the moment, AlgorithmWatch is looking at a German credit scoring algorithm and the automation of human resources management, and is mapping the growth of automation across a variety of sectors.
Alliance of Consumer Protection Agencies
Bundesverband der Verbraucherzentralen (alliance of consumer protection agencies [DE 13]) is funded in large part by government grants. It advocates for consumer protection in a wide range of fields, including e-commerce, social networks and networked gadgets. In a detailed position paper published in late 2017 [DE 14], the alliance drafted a set of requirements that it believes automated decision-making processes need to fulfil in order to be in line with consumer protection laws. These requirements include comprehensibility of ADM processes, redress mechanisms for affected individuals, labelling of processes that involve ADM, adaption of the liability framework, and intensified research into such systems.
Bertelsmann Stiftung
In its project Ethics of Algorithms [DE 15], a team at Bertelsmann Stiftung analyses the influence of algorithmic decision-making on society. The project’s stated goal is to shape algorithmic systems in order to increase social inclusion (Teilhabe) by developing solutions that align technology and society.
Bitkom
Bitkom (Association for IT, Telecommunications and New Media [DE 16]) represents digital companies in Germany. It publishes guidelines and position papers on technological and legislative developments. In a publication on AI, it examines in great detail the economic significance, social challenges and human responsibility in relation to AI and automated decision-making. ADM’s ethical, legal and regulatory implications are analysed from the point of view of corporate and social responsibility. [DE 17] In a separate paper focused on the “responsible use of AI and ADM”, the authors recommend that companies develop internal and external guidelines, provide transparency as to where and how ADM is used, conduct a cost-benefit calculation with regard to users, guarantee accuracy of data and document provenance, address and reduce machine bias, and design high-impact ADM systems in a way that a human retains the final decision. [DE 18]
Chaos Computer Club
The Chaos Computer Club (CCC [DE 19]) is an association of hackers who advocate for more transparency in government, freedom of information, and the human right to communication. The annual conference, Chaos Communication Congress, is one of the most popular events of its kind with around 12,000 participants. In recent years, the conference featured various presentations and discussions [DE 20] about how automated decision-making processes change society.
German Informatics Society
With almost 20,000 private members and about 250 corporate members, Gesellschaft für Informatik (German Informatics Society [DE 21]) is the largest association for computer science professionals in the German-speaking world. In a recent study, the authors analysed what they call “algorithmic decision-making” with a focus on scoring, ranging from credit scoring to social credit systems. [DE 22] They recommend intensifying interdisciplinary research into ADM, incorporating ADM into computer science and law curricula, fostering data literacy in economics and the social sciences, developing procedures for the testing and auditing of ADM systems as well as standardised interfaces and transparency requirements, and creating a public agency to oversee ADM. In a response to the government’s AI strategy [DE 23], the society stated that a European contribution to the development of AI has to be the development of explainable algorithmic decision-making and explainable AI that prevents discrimination.
Initiative D21
Initiative D21 [DE 24] is a network of representatives from industry, politics, science and civil society. Its working group on ethics published discussion papers on ADM in medicine, public administration, elderly care and other sectors. [DE 25]
Netzpolitik.org
The platform Netzpolitik.org [DE 26] states that its mission is to defend digital rights. Its team of authors covers the debates about how political regulation changes the Internet and how, in turn, the Internet changes politics, public discourse and society. Articles that deal with ADM approach the issue mostly from a data protection and human rights perspective.
Robotics & AI Law Society
As a relatively new actor in the field of AI in Germany, Robotics & AI Law Society (RAILS) states that it wants to address challenges of “(AI) and (intelligent) robotics” and “actively shape the discussion about the current and future national and international legal framework for AI and robotics by identifying the need for regulatory action and developing concrete recommendations.” [DE 27] The organisation also states that its “aim is to ensure that intelligent systems are designed in a responsible way, providing a legal framework that facilitates technical developments, avoids discrimination, ensures equal treatment and transparency, protects fundamental democratic principles and ensures that all parties involved are adequately participating in the economic results of the digitalization.” [DE 28]
Stiftung Neue Verantwortung
The think tank Stiftung Neue Verantwortung (Foundation New Responsibility) [DE 29] develops positions on how German politics can shape technological change in society, the economy, and the public sector. It works on ideas of how algorithms can be used to foster the common good, explores the potential of ADM for fairer decision-making processes, and attempts to create a development process for ADM that respects the common good in its design.
Regulatory and self-regulatory measures
Data Ethics Commission
The Data Ethics Commission was set up by the government to “develop ethical standards and guidelines for the protection of individuals, the preservation of social cohesion and the safeguarding and promotion of prosperity in the information age.” [DE 30] [DE 31] The commission is asked to propose a “development framework for data policy, the use of algorithms, artificial intelligence and digital innovations” for the government and parliament. The committee consists of 16 experts from politics, academia and industry and is led by the Federal Ministry for Justice and Consumer Protection and the Federal Ministry of the Interior, Building and Community.
One of three key areas addressed in the questions posed to the commission relates to algorithmic decision-making. It emphasises that “people are being evaluated by technical processes in more and more areas of life”. [DE 32] The Commission looks at how algorithms, AI and digital innovations affect the life of citizens, the economy or society as a whole. It also keeps an eye on what ethical and legal questions arise, and it gives advice on which technologies should be introduced in the future and how they should be used. In its first set of recommendations sent to the government, the commission argued that ethics does not “mean primarily the definition of limits; on the contrary, when ethical considerations are addressed from the start of the development process, they can make a powerful contribution to design, supporting advisable and desirable applications.” [DE 33]
Data Protection Agencies’ joint position on transparency of algorithms
In a joint position paper [DE 34], the data protection agencies of the federal government and eight German federal states stated that greater transparency in the implementation of algorithms in the administration was indispensable for the protection of fundamental rights. The agencies demanded that if automated systems are used in the public sector, it is crucial that processes are intelligible, and can be audited and controlled. In addition, public administration officials have to be able to provide an explanation of the logic of the systems used and the consequences of their use. Self-learning systems must also be accompanied by technical tools to analyse and explain their methods. An audit trail should be created, and the software code should be made available to the administration and, if possible, to the public. According to the position paper, there need to be mechanisms for citizens to demand redress or reversal of decisions, and the processes must not be discriminatory. In cases where there is a high risk for citizens, a risk assessment needs to be carried out before deployment. Very sensitive systems should require authorisation by a public agency that has yet to be created.
Draft law on automated driving
In 2017, the German government proposed the adoption of a law on automated driving. [DE 35] The draft law determines that the driver will hold the final responsibility over the car, even if the system was driving automatically. Hence, drivers must always be able to interrupt the automated driving and assume control themselves. The law distinguishes four levels of automation: partly automated, highly automated, fully automated and autonomous driving. Autonomous driving is not covered by the draft law.
Ethics Commission on Automated and Networked Driving
The Federal Ministry of Transport and Digital Infrastructure convened this commission in 2016 to work on ethical questions that arise with the introduction of self-driving cars. [DE 36] Under the premise that the automation of traffic promises to reduce accidents and increase road safety, the commission discussed an array of questions and, in 2017, it published 20 rather abstract principles on automated and networked driving. [DE 37] These principles include the suggestion that automated systems can only be approved if, on balance, they promise to lead to a reduction of harm, and that a public agency for the assessment of accidents involving automated vehicles should be established to gather evidence in order to guide future development.
German Ethics Council
In order to inform the public and encourage discussion in society, the German Ethics Council [DE 38] organises public events and provides information about its internal consultations. The Council addresses questions of ethics, society, science, medicine and law, and the consequences for individuals and society related to research and development, in particular in the field of life sciences and their application to humanity. In the past, the German Ethics Council has discussed ethical issues related to the health sector. Those discussions focussed on developments in synthetic biology in 2009, and on technological innovations and their implications in the field of genetic diagnosis and its use in medical practice in 2012. [DE 39] [DE 40]
In 2017, the Ethics Council published a report on the consequences of automatically collecting, analysing and then inferring insights from large data sets in the health sector. [DE 41] It concluded that these processes lead to stratification – the algorithmic grouping of individuals based on certain characteristics. This can be useful for diagnostics and therapy, but the council cautions against erroneous attribution of data. “Big Data” in the context of health promises to be especially fruitful in biomedical research, healthcare provision, for insurers and employers, and in commercial applications. However, the council warns against the exploitation of individuals’ data, discrimination, the linking of health data to other data by commercial actors, an “excessive regime of self-control aided by such services and devices [that] can contribute to an exaggerated drive for optimisation detrimental to personal health”, and privacy breaches.
In 2017, the Council’s annual conference focused on “autonomous systems” in general, and discussed “how intelligent machines are changing us”. [DE 42]
German Institute for Standardisation
The German Institute for Standardisation (Deutsches Institut für Normung e.V. (DIN) [DE 43]) works on standardisation for e-mobility, energy transition, Industry 4.0, secure online identification, smart cities, and smart textiles. One working group, established at the beginning of 2018 as part of DIN’s Standards Committee on Information Technology and Applications (NIA), focuses on ethical and societal issues concerning AI. [DE 44] Their first step was to draft a working paper on the terminology of AI itself, a topic considered to be of particular importance. The development process of new norms is closed to the public and further details are not yet available.
ADM in Action
Automated management of energy infrastructure
The transition to renewable energies, technological innovation, and the more flexible trade of electrical energy across Europe continue to change and challenge Germany’s energy infrastructure. Smart grids and a growing number of private households producing their own electricity (so-called prosumers) only add to the challenge. ADM in the area of distribution grids ensures a more resilient supply and enables a smoother feed-in and control of decentralised and fluctuating electricity on different levels. On the transmission grid level, automation has already been in use for some time. It was established to support the complex task of distributing, monitoring and controlling electricity flows to ensure system and frequency stability. Today, the new generation of Supervisory Control and Data Acquisition (SCADA) systems is an essential part of this infrastructure. They use machine learning and predictive analysis for real-time monitoring and control in order to avoid frequency fluctuations or power supply failures. [DE 45] At the same time, government experts point out that energy supply, as a critical infrastructure, is under special threat from cyber attacks. [DE 46] Advocacy groups also warn against the extensive surveillance capabilities that come with the implementation of smart meters in private homes. [DE 47]
Credit scoring
In Germany, as in many other countries, private credit scoring companies rate citizens’ creditworthiness. A low score means banks will not approve a loan or may reject a credit card application, and Internet service providers and phone companies may deny service. Credit scoring therefore has enormous influence on people’s ability to meaningfully participate in everyday life. The leading company for scoring individuals in Germany, SCHUFA Holding AG, has a market share of up to 90 per cent (depending on the sector), so it wields immense power. The civil society organisations AlgorithmWatch and Open Knowledge Foundation Germany initiated a project called OpenSCHUFA. [DE 48] They asked individuals to donate their SCHUFA scores to the project, so that data journalists and researchers could systematically scrutinise the process. The results of the analysis were published in late November 2018. [DE 49] Data journalists from Spiegel Online and the public broadcaster Bayerischer Rundfunk identified various anomalies in the data. For instance, it was striking that a number of people were rated rather negatively even though SCHUFA had no negative information on them, e.g. on debt defaults. There were also noticeable differences between alternate versions of the SCHUFA scoring system. The credit report agency offers its clients (such as local banks or telecommunication companies) a score specifically tailored to their business segment. In one of the available cases, the scores differed by up to 10 per cent depending on whether the client used version 2 or version 3 of the scoring system as a reference. This indicates that SCHUFA itself deems version 2 to be somewhat lacking; however, version 2 is apparently still used by SCHUFA clients.
“Intelligent video surveillance system” in the city of Mannheim
The city of Mannheim in Baden-Wuerttemberg launched an “intelligent video surveillance” project, developed in cooperation with the Fraunhofer Institute for Optronics, Systems Engineering and Image Evaluation. The technology is not based on face recognition, but on “automatic image processing”. [DE 50]
The cameras are being installed in stages: by 2020, around “76 cameras will be used to monitor people in central squares and streets in the city centre and scan their behaviour for certain patterns” [DE 51] that “indicate criminal offences such as hitting, running, kicking, falling, recognised by appropriate algorithms and immediately reported to the police”. In this way, the “police can intervene quickly and purposefully and save resources in the future”, says the city’s website. Critics warn that the application of such behavioural scanners, here in the form of video surveillance with motion pattern recognition, “exerts a strong conformity pressure and at the same time generates many false alarms”, as “it is also not transparent to which ‘unnatural movements’ the algorithms are trained to react. Thus, lawful behaviour, such as prolonged stays at one place, could be included in the algorithms as suspicious facts.” [DE 51]
The stated goal of the municipal government is to make “the fight against street crime more efficient and provide more security to people in the city”. [DE 50] The system is one component of a “comprehensive security concept of the City of Mannheim”, which also includes a regular “urban security audit”, “Security Round Tables”, mobile security guards at certain exposed locations, and the promotion of crime prevention by the organisation ‘Security in Mannheim’.