EU Commission publishes white paper on AI regulation 20 days before schedule, forgets regulation

In a set of documents published today, the European Commission unveiled its strategy for “Europe's digital future”. Despite some promising regulatory proposals, the Berlaymont’s program for automated systems fails to seriously appreciate the risks.

In July 2019, the then president-elect of the European Commission, Ursula von der Leyen, announced plans to “regulate Artificial Intelligence” within 100 days of taking office. It is now day 80 and, instead of regulation, she has presented a white paper that will inform legislative action in the coming years.

In line with previous communications from the Commission, the white paper enthusiastically shares the industry’s vision of the topic. Quoting abundantly from Capgemini, McKinsey and other consultancies, the paper’s first part states that automated systems will help tackle the climate crisis (page 5), improve education (6), provide security (2) and even extend the lifetime of your electrical appliances (2). The Commission puts forth several “actions” to promote the sector’s development accordingly, such as support for poaching engineers from abroad (7) and the development of bespoke research centers (6).

Future regulation

The second part of the white paper deals with possible regulation. In line with the proposals of Germany’s Datenethikkommission (AlgorithmWatch reported in October), the Commission plans to divide automated systems into categories of risk. In a slightly tautological paragraph, the authors of the white paper explain that applications that carry “significant risks” in sectors where “significant risks can be expected to occur” are to be considered “high-risk” (17). The data sets used to train the algorithms of such systems should be diverse enough to avoid discriminatory outcomes, and they should be accessible to regulatory authorities, the Commission wrote. In some cases, automated systems might even have to be re-trained with new, European data prior to their introduction in the EU.

The authors of the white paper recognized that enforcing current legislation on algorithms would be especially difficult, given the “black box” nature of most of them (14). They plan to encourage the development of public or private testing centers that would assess whether automated systems comply with existing legislation (25), much as vehicle inspections are carried out by private garages.

Fostering trust

For the Commission, regulation is necessary only inasmuch as it can foster trust in Artificial Intelligence and encourage adoption by European consumers (9). That automated systems might carry risks in themselves and should be reined in or prohibited does not seem to feature in the Commission’s toolbox. The white paper eschews any discussion of automated weapons, for instance, and only pays lip service to the CO2 emitted by data centers and their gargantuan energy needs.

On real-time face recognition, for instance, the Commission discussed the possibility of a ban in a draft version of the white paper. In the final version, the Commission sees no need for a moratorium on what is now dubbed “remote biometric identification,” saying only that it should be deployed where there is “substantial public interest” (22).

Throughout the white paper and the accompanying documents, the Commission considers automated systems as opportunities to harness, with minor problems that can be corrected. During the press conference on 19 February, Commissioner Margrethe Vestager repeated that there was an “urgency” to “make the most of [AI]”. That automated systems could systematically alter the balance of power in favor of the powerful, as several scholars have made clear1, did not make it into the white paper.

Data fetishism

In a fit of data fetishism, von der Leyen wrote in an op-ed that "85% of the information we [produced was] left unused" and that "this [needed] to change". But more data is not always better, especially for the individuals whose data is being used. The GDPR calls for data minimization, after all.

Mentioning her background as a medical student, she added that she “saw first-hand how tech could save lives”. But that’s only half the story. Technology, in medicine as in other fields, is of little use in itself. In the 1680s, early microscope users documented bacteria swimming in groundwater,2 but it took over two hundred years for pasteurization to become widespread and dramatically improve sanitation. The technology had been there for a long time. Only the willingness to use it to save lives was lacking.

The same could happen with automated systems. The Commission seems to consider the benefits of automated systems to be real and any risk to be “potential” (22). Their reckless deployment, combined with a sense of urgency to “make the most of [them]”, might overwhelm enforcement agencies that are already at a loss.


1 See for instance Dave, K., Systemic Algorithmic Harms.

2 On the fascinating life of the first microscope-builder, Antoni Van Leeuwenhoek, see: Gest, H. (2004). The discovery of microorganisms by Robert Hooke and Antoni Van Leeuwenhoek, fellows of the Royal Society. Notes and records of the Royal Society of London, 58(2), 187-201.

Nicolas Kayser-Bril

Reporter

Photo: Julia Bornkessel, CC BY 4.0
Nicolas is a data journalist working for AlgorithmWatch as a reporter. He pioneered new forms of journalism in France and in Europe and is one of the leading experts on data journalism. He regularly speaks at international conferences, teaches journalism in French journalism schools and gives training sessions in newsrooms. A self-taught journalist and developer (and a graduate in Economics), he started by building small interactive, data-driven applications for Le Monde in Paris in 2009. He then built the data journalism team at OWNI in 2010 before co-founding and managing Journalism++ from 2011 to 2017. Nicolas is also one of the main contributors to the Data Journalism Handbook, the reference book for the popularization of data journalism worldwide.