Automating Society



By Nicolas Kayser-Bril

French lawmakers’ first confrontation with ADM came in 1978, when they introduced a law to ban it unconditionally. The new law granted citizens the right to access and modify their personal data and required corporations and the government to request pre-authorisation before creating databases of people. At the same time, it prohibited credit scores—something which continues to this day.

Over time, it appears that the police felt only loosely bound by this law. According to a 2011 parliamentary review of police databases, over a quarter of them had not been legally authorised. [FR 1] They have since been legalised.1

Recent legal changes have followed the same pattern. Following a law change in 2016, it became mandatory for all branches of government to make their algorithms transparent. However, journalists who reviewed three ministries discovered that none of them had complied with the regulation.

During the state of emergency, under President Hollande (2015-2017), laws were passed to allow the police to undertake automated mass-surveillance of internet activity. In addition, the Ministry of the Interior is currently looking at ways to link all CCTV footage to biometric files and automate the detection of ‘unusual’ behaviour. [FR 3] However, several watchdogs—some financed by the state—lobby for more transparency relating to how ADM is used.

Political debates on aspects of automation – Government and Parliament

AI strategy of the government

The French government’s current AI strategy follows the ‘AI for Humanity’ [FR 4] report, written by MP and mathematician Cédric Villani in March 2018, and adopted by President Macron the following month. [FR 5] However, the president chose not to take up most of the report’s recommendations, including a proposal to double the salaries for young academics working in AI.

Instead, Macron chose to adhere to the following guidelines: to focus the strategy on health, transport, security and the environment; to double the number of students in AI (although no target date was set); and to act at the European level. In addition, he wants to simplify regulations involving AI experiments and create a €10 billion innovation fund, part of which would pay for work in AI.

In March 2017, President Hollande’s administration published its own AI strategy, but it was coordinated by a different branch of government from the one Macron used. [FR 6] This multiplicity of AI reports makes it harder for the French administration to know exactly what the current official strategy is.

National Digital Council

In 2011, President Sarkozy’s administration created a National Digital Council (Conseil National du Numérique or CNNum) which continues to this day. Council members are selected by the government and only have consultative powers. Historically, the CNNum has included high-profile entrepreneurs and think tank employees whose collective Twitter clout gives them an outsized influence.

By its nature, the CNNum is resolutely pro-business, but it has also clashed with the government on issues concerning civil liberties. In particular, it opposed ADM with regard to state surveillance of citizens (see the Regulatory Measures, Mass Surveillance section for further details). In that case, it argued that predictive algorithms would reinforce social biases. [FR 7]

The CNNum did not write its own AI report, but it did contribute in large part to the government’s first AI strategy. This strategy highlighted the need for well-balanced human-computer interactions, and co-operation between all stakeholders before the deployment of ADM in companies.

After a clash with the government in December 2017—when the Minister for Digital Affairs refused to let one expert join the college—the CNNum resigned en masse. [FR 8] As a result, the government nominated a new set of experts who were more aligned with its opinions. Some commentators think that this episode might reduce the influence and relevance of the consultative body in the future.

Etalab – Entrepreneurs in the public interest

Etalab, the open data outlet of the French government, runs a programme that lets highly educated, tech-savvy youths work as “entrepreneurs in the public interest”. This means that they are embedded in the administration for a ten-month period to experiment with new ways of doing things and some of them have worked on ADM. [FR 9] However, the impact of this programme is hard to assess. For instance, one entrepreneur—who was embedded in the fraud-tracking service of the Ministry of Finance—said that his algorithm would not replace the legacy system. At this stage, it is unclear whether these public-sector entrepreneur posts will amount to anything more than glorified internships.

Political debate on aspects of automation – Civil Society and Academia

AlgoGlitch

In late 2017, the CNNum (see above) tasked the Medialab at Sciences Po, a research centre closely associated with digital activism, to study how algorithmic glitches ‘trouble’ the general public. AlgoGlitch [FR 10], which folded in August 2018, mostly ran workshops on the topic and its results remain purely qualitative.

Algo Transparency

Algo Transparency is an informal team of technologists who monitor recommended videos on YouTube—not just French content—and publish the results on the AlgoTransparency website. [FR 11] Funding and network support come, in part, from Data For Good [FR 12], which is a French incubator of civic-tech projects financed partly by private-sector foundations.

FING – New Generation Internet Foundation

The New Generation Internet Foundation (Fondation Internet Nouvelle Génération, or FING) [FR 13] is a think tank, founded in 2000, which brings together large and medium-sized companies as well as public bodies. It organises conferences and runs the Internet Actu news website, which translates English language articles on ADM and publishes them in French.

On the issue of ADM (which it refers to as “systems”), FING advocates strongly for a measure of “mediation” which it defines as the ability for users to receive support, whether automated or human, whenever needed.

La Data en Clair

La Data en Clair (Data in the Clear) [FR 14] started in June 2018 and is an online magazine focused on the ethical aspects of Artificial Intelligence. It published several articles in June, suspended operations for a while, and resumed work in November 2018. [FR 15]

La Quadrature du Net

On the NGO side, La Quadrature du Net [FR 16] is dedicated to online freedom. It follows ADM issues at the French or European level only when they encroach on individual freedoms, as with smart cities, online censorship or predictive policing. The organisation does not consider ADM a key area of interest and has never led any campaigns (e.g. petitions or public-awareness drives) on the issue. It has, however, fought issues related to ADM indirectly. For example, it started a legal battle to oppose the creation of a biometric database of all French nationals—a prerequisite for large-scale face recognition—but it ultimately lost that battle. [FR 17]

NEXT INpact

The online news outlet NEXT INpact [FR 18] is one of the very few French newsrooms to consistently follow issues related to ADM on the national and governmental level. Its journalists regularly use freedom of information requests to test transparency laws, and they were responsible for the publication of some algorithms used by the state. However, they do not cover the private sector with the same intensity.

Regulatory and self-regulatory Measures

Algorithm transparency

Article L311-3-1 of the legal code of relations between the administration and the public [FR 19] states that any decision made using an algorithm must mention the fact that an algorithm was used. In addition, the rules defining the algorithm and what it does must be released upon request.

In June 2018, the French Supreme Court for administrative matters (Conseil d’Etat) stated that a decision based solely on an algorithmic system could only be legal if the algorithm and its inner workings could be explained entirely to the person affected by the decision. If this is not possible (because of national security concerns, for instance), then algorithmic decision-making cannot be used. [FR 20]

This piece of legislation was passed in 2016 and became law on September 1, 2017. A year later, journalists tried to find cases where this law had been applied, but they could not find any. [FR 21] The law authorising the Parcoursup system (see the ADM in Action section for details) contains a clause that freed it from complying with this regulation. [FR 22]

Autonomous driving

A law currently being debated in parliament aims to clarify responsibilities around autonomous driving. [FR 23] The stated goal is to allow tests of fully autonomous vehicles. In May 2018, the government announced that the necessary legislation would not be passed until 2022.

When signed into law, it will allow autonomous vehicles up to SAE (Society of Automotive Engineers) level 4 on all roads (high automation, i.e. where the system is in complete control of the vehicle and a human presence is not required). However, its application will be limited to specific conditions. [FR 24]

Up until now, car manufacturers have used ad-hoc authorisation to test their vehicles. [FR 25]

Computers and Freedom Law from 1978

The Computers and Freedom Law followed an outcry after a government agency planned to connect several databases and use social security numbers as unique identifiers. The result of the argument was the creation of the legal concept of “personal data”—and its protection—in France. It has been modified substantially over the years to keep up with technological changes, most notably in 2004 and in 2017 (in accordance with the GDPR).

Strict limitations on ADM

Article 2 of the Computers and Freedom Law [FR 26] states that no judicial decision “regarding human behaviour” can be made on the basis of an ADM that “gave a definition of the profile or personality” of the individual. The same applies to administrative and corporate decisions (this provision now constitutes article 10 [FR 27] of the law, in a slightly watered-down version that contains several exceptions).


The law states, in articles 15 and 16 [FR 28], that any algorithmic decision-making must be submitted to the CNIL (see under Oversight Mechanisms below). The same law allows, in chapter IV, anyone to access or request modification of his or her personal data stored by any administration or company (this now constitutes article 22 ff. [FR 29]). However, mandatory declaration, which had replaced pre-authorisation a few months after the law was passed, was dropped in 2017.

Mass surveillance

A 2015 law relating to domestic intelligence allows the police and intelligence agencies to request that internet service providers deploy algorithmic systems to detect terrorist threats using “connection data” only (Article L851-3 of the Code for domestic security [FR 30]).

These deployments are regulated by a consultative body known as the National Commission for the Control of Intelligence Gathering Techniques (Commission Nationale de Contrôle des Techniques de Renseignement).

The Ministry of the Interior is required to produce a parliamentary technical report on the use of this measure before the end of June 2020. Based on this, lawmakers will decide whether to renew this legal clause. [FR 31]

Oversight mechanisms

CNIL – France’s data protection authority

The 2016 Digital Republic Law states that the National Commission on Computers and Freedom (CNIL), a nominally independent institution financed by the Prime Minister’s Office, must concentrate its energies on the fast-changing digital environment.

There followed a series of forty-five seminars, discussions and debates, which resulted in an 80-page report [FR 32] published in late 2017. The report attempted to explain the issues under scrutiny, especially regarding ADM. Among its recommendations, which included better ethics education, the report also proposed the creation of a national algorithm audit platform. However, this particular recommendation has yet to be discussed in parliament.2

ADM in Action

Automated processing of traffic offences

Automated processing of traffic offences started in 2003 with the deployment of about 50 radars across France. These now number approximately 4,500 and record over 17 million offences per year—and the trend is increasing.3 The gross income from these radars is probably slightly over €1 billion per year, making it one of the most visible ADM processes French citizens are likely to encounter.

In 2011, the Ministry of the Interior created a new government agency to manage radars and traffic offences—the National Agency for the Automated Processing of Offences (ANTAI). Somewhat revealingly, ANTAI does not mention anywhere in its annual report how it complies with article L311-3-1 of the legal code covering relations between the civil service and the public. This relates back to the judgement, mentioned earlier, which requires transparency around the use of algorithms (see Algorithm transparency in the Regulatory Measures section for details). ANTAI did not answer a request for comment on this point. [FR 34]

Bob-Emploi – Matching unemployed people with jobs

In 2016, a 22-year-old caused a media sensation by claiming that he could use algorithms to match jobseekers with job vacancies and thereby reduce unemployment by 10%. As a result of his claim, he received €2.5m from public and private donors [FR 35] to finance a team of nine people. They also gained access to anonymised job seeker data and support from the national employment agency, a rare privilege. [FR 36]

However, the initiative, called Bob-Emploi (Bob-job), failed to reduce unemployment (as of June 2018, only 140,000 users had created an account). [FR 37]

‘Black boxes’ and mass surveillance

In 2017, some two years after the law that authorised ‘black boxes’ was passed [FR 38], the first one was deployed. This consisted of an algorithm—placed at the level of internet service providers—and used by the intelligence services to analyse internet traffic in order to detect terrorist threats (see legal dispositions above).

In its annual report, the oversight body for domestic surveillance (which only has consultative powers) stated that it was asked for an opinion on this system, but it provided no information regarding the scope or goals of the ‘black box’. [FR 39]

Credit scoring

The sale of an individual’s credit score is forbidden in France.5

The CNIL prevents the full automation of credit scoring. One ruling demanded that companies perform manual checks before adding someone’s name to a list of people who have not paid their dues. [FR 40] A file of people who have defaulted on their credit is maintained by the national bank.6

In 2012, the government pushed for the creation of a database containing all loans between €200 and €75,000, together with the names of the debtors. Ostensibly, this was to allow lenders to check whether a debtor was already heavily indebted. However, the law was nullified on privacy grounds by the Constitutional Council.7

Dossier Médical – Personalised health files

Health is one of the four priority areas of the government’s AI strategy (see AI strategy of the government in the POLITICAL DEBATES ON AUTOMATION section for details).

In a 2018 report [FR 43], Inserm, a government outlet specialising in medical research, stated that the health sector as a whole did not use ADM. However, it added that a few test projects are underway locally or will start in 2019—one of these relates to decision support for the treatment of breast cancer while the other is concerned with complex ultrasound diagnosis.

This lack of activity highlights the failure of a project started in 2004 aimed at digitising and centralising all personal health information in a “personalised health file” (DMP for Dossier Médical Personnel). The DMP project was supposed to allow ADM to operate in the health sector.8 The DMP was criticised by the French court of auditors in 2012 as ill-conceived and very expensive. [FR 45] As a result, it was rebranded as the “shared health file” (Dossier Médical Partagé). However, it is barely used (just 600,000 DMPs were created in 2017 [FR 46]).

Before this, in 2008, pharmacists started building their own central file of individual drug purchases, and some 35 million people are now registered on the service. [FR 47] The data is used by pharmacists to check for potentially harmful drug interactions. Laboratories can also access the database to perform drug recalls. However, the data cannot be sold to third parties.

Parcoursup – Selection of university students

In 2018, the French government pushed forward a reform whereby universities had the right to turn down applications from prospective students (previously, anyone with a high-school diploma had the right to enrol). The reform was enacted with the launch of an online tool called Parcoursup, which matched the wishes of students with available offerings.

On top of the drawbacks that plague many industrial-scale projects (numerous experts advised that it would be better to spread this reform over a longer period), Parcoursup was criticised for the opacity of its decision-making process.

When drafting the law that created Parcoursup, the government made sure to protect it from the legal obligation to tell users that decisions were made algorithmically and to make the algorithm transparent. Instead, the government published the source code of the matching section of the platform but remained silent on the sorting part.

It is believed that the sorting part uses personal data to help universities select the students who best fit certain courses of study. [FR 48]

1 A 2018 parliamentary report showed that the chaotic situation prior to 2018 has been fixed. However, at least 11 databases containing personal information—mostly compiled by the intelligence agencies—are not controlled by any organisation. It is unclear as to whether these databases are already being used to detect devious behaviour. However, the military plans to go down this route. See [FR 2]
3 Observatoire national interministériel de la sécurité routière, “La sécurité routière en France : Bilan de l‘accidentalité de l‘année 2017” [FR 33], 2018, p. 10 8, and “ANTAI, Rapport d’Activité 2017” [FR 34], July 2018.
5 We were unable to find any journalistic or parliamentary work on the topic. There are, however, companies such as Synapse who sell such services (installing a credit scoring mechanism, as opposed to selling scores).
7 AFP/CBanque Loi Hamon: “le Conseil constitutionnel rejette la création d’un fichier des crédits à la consommation”, March 13, 2014 [FR 41]. The Supreme Court’s ruling was worded in such a way that made experts assume that a file containing the credit histories of non-defaulting clients would never be possible under current legal conditions. See Cazenave, Frédéric, “Surendettement: le fichier positif définitivement enterré”, Le Monde, June 25, 2015. [FR 42]
8 The rationale of the personalised health file was made clear in a parliamentary report [FR 44] on the topic in January 2008, among others p. 28.



Nicolas Kayser-Bril, 32, is a Berlin-based, French-German data-driven journalist. He created the data journalism team at OWNI, a French news startup, in 2010, then co-founded and led the data journalism agency Journalism++ from 2011 to 2017. He now splits his time between teaching programming at HMKW (Berlin), helping fellow journalists in their data-driven investigations, consulting for various organizations and writing history books.
Published: January 29, 2019

Category: report