NETHERLANDS
By Gijs van Til
In the Netherlands, the impact of automated decision-making (ADM) and Artificial Intelligence (AI) on individuals and society is predominantly discussed as part of the larger Dutch strategy on digitisation. As a prominent part of this strategy, the Dutch Digital Government Agenda plays an important role in setting out how digitisation of public administration at multiple administrative levels should progress and how ADM and AI could be involved in this process. Rather than policy, many of the steps taken by the government in this field focus on intensive research into regulatory approaches and into how public values and human rights can be safeguarded and protected alongside these developments. At the same time, aspects of ADM and AI already play a role in some of the current proposals for regulatory measures, for example, in an act setting out the regulatory framework under the Digital Government Agenda. Civil society and academia, in turn, are concerned about the pitfalls and ethics of the growing use and importance of ADM and AI. Alongside existing actors, such as the Rathenau Institute and Bits of Freedom, a Dutch Alliance for Artificial Intelligence was recently launched to enable multi-stakeholder discussion on the responsible development, deployment and use of AI.
Meanwhile, more and more ADM systems are already in use in the Netherlands. Notable examples include a big data analysis system called System Risk Indication (SyRI), as well as several predictive policing initiatives run by the Dutch police. In addition, the private sector already offers a range of different ADM-based solutions, for example in the field of alternative dispute resolution and credit/risk scoring.
Political debates on aspects of automation – Government and Parliament
Unlike many other countries, the Netherlands has not yet worked on a national agenda specifically focussed on automated decision-making or Artificial Intelligence. In politics, the role of algorithms and ADM in society is being treated as part of a larger discussion on digital transformation.
Set out below are the parts of the current political debate that focus specifically on the use of ADM and that could have the greatest impact on individuals.
Dutch Digitalisation Strategy
The Nederlandse Digitaliseringsstrategie: Nederland Digitaal (Dutch Digitalisation Strategy) [NL 1] highlights the opportunities that the use of automated decision-making and Artificial Intelligence brings, in particular in making all sorts of processes more efficient (the processes in question are not detailed in the report).
The strategy stresses the need to keep up with the enormous competition from other countries in the field, and therefore cooperation between the public and the private sector is encouraged. Such cooperation is, for example, given form in the Commit2Data programme. [NL 2] This is a multi-year, national research and innovation programme built on the basis of a public-private partnership. It aims to further develop the use of Big Data around themes such as logistics, health and data handling. The sub-programme Verantwoorde Waardecreatie met Big Data (Responsible Value Creation with Big Data or VWData) focuses specifically on research into the responsible use of applied technical and societal Big Data solutions.
In the field of Artificial Intelligence, a similar innovation programme is envisaged with the aims of developing knowledge about the technologies behind AI, its application, and the human factor (ethics, behaviour and acceptance). Transparency and explainability of algorithms are set to be an important theme in the programme.
At the same time, the strategy warns about the risks involved in the use of ADM and AI, for example in terms of autonomy and equal treatment. An entire chapter of the strategy is therefore dedicated to the safeguarding of public values and human rights. The need for their inclusion in the development and use of data and algorithms is emphasised.
Agenda Digitale Overheid – Digital Government Agenda
As part of the larger digitisation strategy, the Dutch government published its Digital Government Agenda: NL DIGIbeter in July 2018. [NL 3] In the agenda, the government sets out how the digitisation of public administration at multiple administrative levels should progress. It acknowledges the increasing use and importance of ADM in public decision-making processes, service provision and governance—for example, in the process of issuing permits. It also encourages experiments in this area by governmental institutions, both on a national and sub-national level.
As in the broader Dutch Digitalisation Strategy, the agenda highlights the importance of the protection of constitutional rights and public values in cases where ADM is applied. It also stresses that even semi-automated decisions should comply with the principles of Dutch administrative law. In this light, many of the calls for action in the agenda are focussed on research into the consequences and ethics of ADM in inter alia administrative service provision.
Some of the research as set out in the Digital Government Agenda:
In August 2018, a research report on the relationship between algorithms and fundamental rights, instigated by the Dutch government and conducted by Utrecht University, was sent to the Dutch parliament. [NL 4] The report identifies problems that the growing importance of algorithms brings with respect to fundamental rights. These include the right to privacy, equality rights, freedom rights and procedural rights. The government was supposed to react to the report during the summer of 2018 but, as of November 2018, it had still not done so.
In June 2018, the government asked the Wetenschappelijke Raad voor het Regeringsbeleid (Scientific Council for Government Policy, WRR)—an independent advisory body on government policy whose members include prominent social scientists, economists, and legal scholars—to conduct research into the impact of Artificial Intelligence on public values. [NL 5] The request mentions the growing impact of AI, both in the public and private sector, and stresses the need for cross-domain research into the impact on public values. Besides the opportunities, the request also mentions specific risks associated with the use of AI, for example, in terms of discrimination regarding vulnerable groups.
The WRR has already published several reports that touch on the societal consequences of the use of ADM. These include the 2016 publication, “Big Data and Security Policies: Serving Security, Protecting Freedom” [NL 6], about the use of Big Data by the police, the judiciary and the intelligence services. It concluded that the existing regulatory framework should be upgraded significantly in order to provide sufficient protection of fundamental rights and safeguards against erroneous use. Specifically, the report concludes that the use of risk profiles and (semi-)automated decision-making should be regulated more tightly.
Lastly, the Digital Government Agenda mentions that the Wetenschappelijk onderzoeks- en documentatiecentrum (Research and Documentation Centre of the Ministry of Justice and Security or WODC) will carry out research on the regulation and legality of algorithms taking autonomous decisions. The research, planned for 2018, is supposed to focus on the societal consequences of ADM in the near future and any regulatory interventions that might be required. Results have not yet been published.
Dutch Council of State – Raad van State
On August 31, 2018, the Dutch Council of State, the highest advisory body on legislation and governance to both the government and parliament, published a critical assessment of the Dutch Digital Agenda and especially of its ADM provisions. [NL 7] According to the Council of State, the growing importance of ADM carries with it the risk that citizens cannot check which rules are being applied. In addition, it is no longer possible to determine whether the rules actually do what they are intended to do. Furthermore, citizens risk being profiled and confronted with decisions based on information of which the source is unknown. The assessment warns of the detrimental effect upon aspects of the Dutch constitution and the rule of law in general, and upon the position and protection of individuals in particular. As an antidote, the council, among other things, proposes that an administrative decision must contain explanations of which ADM processes (algorithms) have been used and what data has been taken from other administrative bodies. In addition, the council proposes that a human check must be allowed if a citizen objects to an automated administrative decision. This would be done in order to strengthen the position of citizens in automated and follow-up decision-making.
A reaction from the government to the assessment was sent to parliament on November 2, 2018. [NL 8] The government acknowledges the risks identified by the Council of State and mentions that the assessment is in line with actions and processes already initiated by the government.
The use of Artificial Intelligence in the judicial branch
While the Dutch judiciary has yet to implement ADM in any tangible form, steps are being taken to allow it in the near future. The Dutch Digitalisation Strategy has already asked for cooperation between the judicial branch and the scientific field to research ways in which Artificial Intelligence can be applied responsibly. On March 29, 2018, the Ministry of Justice and Security organised a round table discussion on the use of Artificial Intelligence in the legal field. [NL 9] Among the participants were scientists as well as delegates from civil society organisations and industry. In July 2018, the East Brabant District Court, together with Tilburg Law School, appointed a professor to the new special chair in Data Science in the Judiciary. This person will be involved in performing (small) experiments and pilots with AI and ADM at the District Court. [NL 10] The Minister of Legal Protection promised to send a letter to parliament in the autumn of 2018 on the possible meaning of ADM and AI for the judicial branch. As of November 2018, this letter has not been received.
Political debates on aspects of automation – Civil Society and Academia
Bits of Freedom
Bits of Freedom [NL 11] is a digital rights organisation focussed mainly on privacy and freedom of communication online. It strives to influence legislation and self-regulation, and to empower citizens and users by advancing awareness, use, and development of freedom-enhancing technologies. The organisation is very active in public and political debates on the use of ADM in, amongst other things, predictive policing, profiling and credit scoring. For example, it took part in the round table discussion on the use of AI in the judicial branch. [NL 12] More recently, Bits of Freedom ran several campaigns to raise awareness about the growing influence of ADM in modern-day society. For example, it presented a Big Brother Award to the Dutch national police for their predictive policing initiatives. Furthermore, it co-initiated Heel Holland Transparant (All of Holland Transparent) [NL 13], an initiative aimed at illustrating the amount of data gathered by private scoring companies. This was done by making public the personal details of some well-known Dutch people, using only information originating from public sources.
De Kafkabrigade
De Kafkabrigade (The Kafka Brigade) [NL 14] tackles redundant and dysfunctional bureaucracy that prevents people from accessing the services they need and that constrains and frustrates public service staff. The Brigade has published a book called De Digitale Kooi (The Digital Cage) that illustrates the unwanted consequences of the increasing use of, among other things, automated decision-making mechanisms in public service.
Dutch Alliance for Artificial Intelligence
On October 11, 2018, the Dutch Alliance for Artificial Intelligence (ALLAI) [NL 15] was unveiled to the public. The alliance was initiated by three Dutch members of the European High-Level Expert Group on AI: Catelijne Muller, Virginia Dignum and Aimee van Wijnsberghe. The aim of the alliance is to enable a multi-stakeholder discussion on the responsible development, deployment and use of AI. ALLAI focuses on six broad themes that cover AI’s advantages as well as the risks and challenges it brings: high-quality AI; ethics of AI; social impact of AI; education and skills; rights and accountability; and AI for good.
Platform Bescherming Burgerrechten
Platform Bescherming Burgerrechten (Platform for the Protection of Civil Rights) [NL 16] is a civil rights NGO consisting of a network of organisations, groups and individuals who work together to better guarantee and strengthen civil rights in the Netherlands. It focuses particularly on respect for privacy rights, physical integrity, digital autonomy and the right to personal control (and possession) of personal data. Among other things, Platform Bescherming Burgerrechten is involved in a coalition that initiated legal proceedings against the System Risk Indication (Systeem Risico Inventarisatie or SyRI—see ADM in Action).
Privacy First
Privacy First [NL 17] is a non-governmental organisation (NGO) committed to promoting and preserving the public’s right to privacy. The foundation has several focus areas (e.g. financial, online and medical privacy), some of which centre on the growing impact of ADM on society, for example through profiling. Privacy First is also involved in the coalition that initiated legal proceedings against the System Risk Indication (Systeem Risico Inventarisatie or SyRI—see ADM in Action).
Rathenau Institute
The Rathenau Institute [NL 18] is an independent knowledge institute that has recently published several influential reports on automated decision-making, Big Data and AI. These reports are frequently mentioned in political debates, for example in debates on the proposed Digital Government Act. In 2017, the institute published a report called “Opwaarderen. Het borgen van publieke waarden in de digitale samenleving” (“Urgent Upgrade. Protect public values in our digitised society”) in which it concluded that digitisation challenges important public values and human rights such as privacy, equality, autonomy, and human dignity. The report warned that government, industry and society are not yet adequately equipped to deal with these challenges. Also in 2017, the Institute published a report on “Mensenrechten in het robottijdperk” (“Human rights in the robot age”). More recently, in 2018, it published the “Doelgericht Digitaliseren” (“Decent digitisation”) report. This report included a collection of blog posts in which experts set out their views on decent digitisation, for example in terms of how we can stay in charge of algorithms. On the basis of these reports, the institute formulated four virtues that can help us deal better with digital technology: personalisation, modesty (i.e. awareness of the limits of digital technology), transparency, and responsibility.
Scientific research
As set out above, the political debate on aspects of automation has already prompted a considerable number of reports and research projects. In the scientific field, the need to research the impact of automated decision-making across different sectors of Dutch society is acknowledged as well. Such a call can, for example, be found in the Nationale Wetenschapsagenda (Dutch National Research Agenda). [NL 19] Furthermore, automated decision-making is highlighted in multiple programme lines of the Digital Society Research Agenda of the Association of Dutch Universities (Vereniging van Universiteiten or VSNU). [NL 20]
Notable recent research includes a report [NL 21] produced within a research collaboration on fairness in automated decision-making between the Universities of Amsterdam, Tilburg, Radboud, Utrecht and Eindhoven (TU/e). The report provides an overview of public knowledge, perceptions, hopes and concerns about the adoption of AI and ADM across different societal sectors in the Netherlands. Furthermore, a PhD thesis on Automated Administrative Chain Decisions and Legal Protection was recently defended at Tilburg University. [NL 22] The PhD candidate found that, in most cases, administrative chain decisions regarding income tax filings or the granting of child benefits severely lacked transparency and contestability, thereby risking infringements of fundamental rights as well as of administrative principles of good governance.
Regulatory and self-regulatory measures
Digital Government Act
The proposal for the Wet Digitale Overheid (Digital Government Act) [NL 23] was submitted to parliament in June 2018. The act sets out the regulatory framework underpinning the Digital Government Agenda, providing rules on, among other things, the power to impose certain (technical) standards on the government’s electronic communication; data and information security; the responsibility for the management of facilities and services within the generic digital government infrastructure (GDI); and digital access to public services for citizens and businesses.
Data Processing Partnership Act
In September 2018, a public consultation was completed on a proposal for the Wet gegevensverwerking door samenwerkingsverbanden (Data Processing Partnership Act). [NL 24] The aim of the act is to provide a basis for public-private cooperation and to make collaboration between the public and private sectors easier when processing data, specifically when this processing is used for surveillance or investigation purposes, for example to prevent crimes or to detect welfare fraud.
Dutch Road Traffic Act 1994
An amendment of the Wegenverkeerswet 1994 (Dutch Road Traffic Act 1994) passed into law in September 2018. It permits experiments with fully autonomous vehicles once a licence has been obtained. [NL 25]
Code Good Digital Governance
As part of the Digital Government Agenda, the presentation of a Code Goed Digitaal bestuur (Code Good Digital Governance) is envisaged for mid-2019. [NL 26] This code should aim to provide guidance and set out rules for the collection and use of data, and the use of new technologies, in the public space, for example in the context of smart city initiatives.
ADM in Action
ADM in alternative dispute resolution
A few private initiatives offer a form of alternative dispute resolution, or arbitration, through a completely digitised procedure. [NL 27] The cases are handled solely by using digital and algorithm-based processes; the defendant never sees an actual judge. Automated decision-making plays a significant role in these procedures, and these private courts, sometimes mockingly called ‘robo-judges’, are increasingly being used by debt collection companies. Some health insurance companies also impose their use on clients by including arbitration clauses in their terms of use. Recently, this application of digital, privatised dispute resolution has sparked controversy, particularly regarding the lack of due process and the non-transparent nature of such initiatives. The Minister of Legal Protection has been asked questions about this in parliament. [NL 28] In response, the minister commented mostly positively on digital alternative dispute resolution initiatives and downplayed the risks and detrimental effects.
Credit / risk scoring
An increasing number of private companies offer credit scoring services. [NL 29] These services are used by a variety of clients—such as health care insurance providers, energy companies, internet service providers and phone companies—to rate the creditworthiness of citizens. The credit scoring companies gather information on a large scale from a variety of sources to give an automated indication as to whether a potential customer can be trusted. The client of the credit scoring service can subsequently use ADM to decide whether, for example, a potential customer can have insurance or a phone subscription. These private credit scoring companies exist alongside an official and independent financial registration office called Bureau Krediet Registratie (Central Credit Registration Office or BKR). In most cases, the amount of data these companies collect far exceeds the amount available at the BKR.
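To make the mechanism concrete, the sketch below shows, in Python, how an automated creditworthiness indication of this kind could in principle be derived from a handful of data points and then turned into a yes/no decision by a client company. It is a minimal, hypothetical illustration: the data points (months at an address, unpaid invoices, a negative BKR registration), the weights and the acceptance threshold are all invented for this example and do not describe the models actually used by Dutch credit scoring companies.

# Hypothetical illustration only: a minimal rule-based credit-risk score.
# The data points, weights and threshold are invented for this sketch and
# do not reflect how any Dutch credit scoring company actually works.
from dataclasses import dataclass

@dataclass
class Applicant:
    months_at_address: int      # assumed data point
    unpaid_invoices: int        # assumed data point
    has_bkr_registration: bool  # assumed data point (negative BKR listing)

def credit_score(a: Applicant) -> float:
    """Return a score between 0 (high risk) and 1 (low risk)."""
    score = 0.7
    score += 0.1 if a.months_at_address >= 24 else -0.1
    score -= 0.15 * min(a.unpaid_invoices, 3)
    score -= 0.3 if a.has_bkr_registration else 0.0
    return max(0.0, min(1.0, score))

def decide(a: Applicant, threshold: float = 0.5) -> str:
    """The client (e.g. a phone company) turns the score into an automated decision."""
    return "accept" if credit_score(a) >= threshold else "decline or request deposit"

if __name__ == "__main__":
    applicant = Applicant(months_at_address=6, unpaid_invoices=2, has_bkr_registration=False)
    print(decide(applicant))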
Journalistic reporting
ADM in the Netherlands has also found its way into journalism. [NL 30] Several news outlets have implemented, or are in the process of implementing, ‘recommender systems’. These systems semi-automatically decide which articles are shown to each individual visitor or subscriber to a news website. Among these outlets are RTL Nieuws, Het Financieel Dagblad, NU.nl and the Dutch Broadcast Foundation (NOS). Also notable is a kiosk-like online platform called Blendle, which enables users to read articles from multiple newspapers and magazines on a pay-per-view basis. It recently introduced a subscription model that provides subscribers with twenty tailored articles per day. Apart from a few articles that are hand-picked by editors, the selection of these articles is mainly algorithm-based and dependent on a variety of data points (e.g. what articles a user has previously clicked on).
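As a rough illustration of what such a recommender could look like, the sketch below assembles a daily selection from a few editor picks plus articles ranked by overlap with a user’s previous clicks. It is a hypothetical example: the topic tags, the scoring function and the size of the selection are assumptions made for this sketch and are not based on Blendle’s or any other outlet’s actual system.

# Hypothetical illustration only: a daily selection of articles assembled from a few
# editor picks plus an algorithmic ranking based on a user's click history.
# Topic tags, the scoring rule and the selection size are invented for this sketch.
from collections import Counter

def rank_articles(candidates, clicked_topics, editor_picks, total=20):
    """Return editor picks first, then candidates ranked by overlap with past clicks."""
    interest = Counter(clicked_topics)                       # topics the user clicked before
    def score(article):
        return sum(interest[t] for t in article["topics"])   # simple interest overlap
    ranked = sorted(
        (a for a in candidates if a not in editor_picks),
        key=score,
        reverse=True,
    )
    return editor_picks + ranked[: total - len(editor_picks)]

if __name__ == "__main__":
    candidates = [
        {"title": "Budget debate", "topics": ["politics"]},
        {"title": "Ajax wins", "topics": ["sport"]},
        {"title": "AI in courts", "topics": ["tech", "law"]},
    ]
    picks = [{"title": "Editor's long read", "topics": ["culture"]}]
    print(rank_articles(candidates, ["tech", "law", "tech"], picks, total=3))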
Law enforcement initiatives – Predictive policing
At the moment, multiple initiatives are in operation in the Netherlands centred on the use of predictive, algorithm-based methods to anticipate and prevent crimes. Most notably, the national police has, after having run pilots in Amsterdam and The Hague, rolled out a programme called Criminaliteits Anticipatie Systeem (Crime Anticipation System or CAS), which it built itself. [NL 31] [NL 32] The aim of this system is to predict where and when crimes, such as burglary and street robbery, will take place. The system does this by analysing a wide variety of data, such as historic crime rates, data from Statistics Netherlands, and information about recidivists (e.g. dates of birth and addresses), after which the likelihood of these crimes occurring is indicated in the form of a heat map.
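The following sketch illustrates, in very simplified form, how a grid-based ‘heat map’ of this kind can be computed from per-cell signals. It is a hypothetical illustration only: the two inputs (historic incident counts and the number of known recidivists per cell), the weights and the cell size mentioned in the comments are invented for this example and do not represent the actual CAS model.

# Hypothetical illustration only: a grid-based "heat map" score of the kind a crime
# anticipation system might produce. The inputs (historic incident counts, number of
# known recidivists living nearby) and the weights are invented for this sketch and
# are not the actual CAS model.
def heatmap(historic_incidents, recidivists_nearby, w_hist=0.7, w_recid=0.3):
    """Combine two per-cell signals into a normalised 0..1 risk score per grid cell."""
    def normalise(grid):
        peak = max(max(row) for row in grid) or 1   # avoid division by zero
        return [[v / peak for v in row] for row in grid]
    hist, recid = normalise(historic_incidents), normalise(recidivists_nearby)
    return [
        [round(w_hist * h + w_recid * r, 2) for h, r in zip(hrow, rrow)]
        for hrow, rrow in zip(hist, recid)
    ]

if __name__ == "__main__":
    incidents = [[0, 3, 1], [2, 8, 4], [0, 1, 0]]   # burglaries per grid cell (assumed)
    recid     = [[0, 1, 0], [1, 2, 1], [0, 0, 0]]   # known recidivists per cell (assumed)
    for row in heatmap(incidents, recid):
        print(row)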
Other examples of predictive policing initiatives and pilots include the development of RTI-geweld, a risk prediction instrument used to estimate the future risk of violence for all persons appearing in the police’s incident registration system, and ProKID, a method introduced in 2013 that aims to identify the risk of recidivism among twelve-year-olds who have previously been suspected of a criminal offence by the police.
Municipality-level projects
In recent times, on the lower administrative levels (especially in municipalities), a broad range of data-driven or algorithm-based initiatives have seen the light of day. It is beyond the scope of this report to give a detailed overview of all developments at this point, but over recent years many municipalities have, for example, launched smart city initiatives. These initiatives collect a broad range of data from a variety of sources and for a variety of reasons, such as improving safety in entertainment districts and crowd control, but also to regulate air quality and to solve mobility issues. An important development in this regard is the NL Smart City Strategie [NL 33], created in January 2017 by a coalition of (larger) municipalities in collaboration with industry and scientists.
ADM is also used in some municipalities to prevent and detect truancy and early school-leaving. This is done by using algorithms that help decide which students will be paid a visit by a school attendance officer. Similar initiatives exist to detect child abuse and/or domestic violence.
In addition to using System Risk Indication (see below), some municipalities have developed their own initiatives that revolve around the use of algorithms to detect welfare fraud. These programmes take into account data such as dates of birth, family composition, paid premiums and benefits history, as well as data from the Tax and Customs Administration, the Land Registry and the Netherlands Vehicle Authority. Municipalities thus hope to increase the chances of identifying people committing welfare fraud.
An overview of initiatives can be found in the 2018 report Datagedreven sturing bij gemeenten (Data-driven steering in municipalities) [NL 34], which was initiated by the Association of Netherlands Municipalities. The report urges municipalities to share knowledge, and encourages them to cooperate in the roll-out of new initiatives.
Social Welfare – SyRI
Systeem Risico Inventarisatie (System Risk Indication or SyRI) is a big data analysis system that runs under the auspices of the Ministry of Social Affairs and Employment. [NL 35] It is used, on request, by any of the so-called ‘cooperation associations’. These include state institutions such as the employee insurance provider (UWV), the tax office (De Belastingdienst), the social security bank (SVB) and the immigration authority (IND), as well as a few Dutch municipalities, which use it to detect wrongly collected social benefits and other abuse of the social welfare system. The aim of the system is to combat and prevent the unlawful use of, or recourse to, public money, social security institutions or other income-related state benefits. In order for SyRI to operate, data provided by a citizen (for example, to file a tax return) is combined with data from a variety of other sources. An algorithm, involving a risk model with several unknown indicators, then determines whether a citizen should be flagged because of an increased risk of irregularities or potential fraud.
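To illustrate the general shape of such a risk-indication pipeline, the sketch below links records from two registers on a citizen identifier, evaluates a couple of indicators and flags cases above a threshold. Since SyRI’s actual indicators and risk model have not been made public, every field name, indicator and threshold in this example is invented for illustration.

# Hypothetical illustration only: the general shape of a risk-indication pipeline that
# links records from several registers and flags cases above a threshold. SyRI's actual
# indicators and risk model are not public; every field, indicator and threshold below
# is invented for this sketch.
def risk_flags(tax_records, benefit_records, threshold=0.6):
    """Join two registers on a citizen id and compute a toy risk score per citizen."""
    benefits = {r["citizen_id"]: r for r in benefit_records}
    flagged = []
    for tax in tax_records:
        ben = benefits.get(tax["citizen_id"])
        if ben is None:
            continue
        indicators = [
            tax["declared_income"] < ben["income_assumed_for_benefit"],  # invented indicator
            tax["registered_addresses"] > 1,                             # invented indicator
        ]
        score = sum(indicators) / len(indicators)
        if score >= threshold:
            flagged.append((tax["citizen_id"], score))
    return flagged

if __name__ == "__main__":
    tax = [{"citizen_id": 1, "declared_income": 9000, "registered_addresses": 2}]
    ben = [{"citizen_id": 1, "income_assumed_for_benefit": 15000}]
    print(risk_flags(tax, ben))  # -> [(1, 1.0)]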
In recent times, SyRI has increasingly attracted negative attention. The subject has led members of the Dutch parliament to put questions to the Minister of Legal Protection. [NL 36] More importantly, a group of civil rights initiatives, gathered under the name Bij Voorbaat Verdacht (Suspected in Advance), recently started legal proceedings against the use of the software. [NL 37] A trial was set to start in the final months of 2018.
Gijs van Til LL.M. is a project researcher at the Institute for Information Law (IViR) of the University of Amsterdam. In this capacity, he is involved in several of the Institute's ongoing research projects, ranging from the future of copyright to the legal implications of the use of Artificial Intelligence and automated decision-making. During his studies in both Private and Information Law, Gijs gained profound knowledge of the Dutch and European legal systems and developed his interest in the intersection of law and technology. A paper he wrote during his studies, about the relationship between copyright and search engines, was published in a Dutch scientific journal. He wrote his master's thesis on the proposed use of self-/co-regulatory measures to tackle online disinformation. Besides his work at IViR, Gijs is a board member of Clinic, a law clinic providing free legal advice to individuals and start-ups on issues ranging from intellectual property to privacy.