On 21 April 2021, the European Commission published the draft AI Act, a major piece of legislation expected to regulate automated systems for the next decade. The definition of Artificial Intelligence in the text was remarkably broad.
Its article 3 states that an Artificial Intelligence system is software that generates outputs for a given set of objectives, provided it uses “machine learning approaches”, “logic programming”, “expert systems” or “statistical approaches”, among other techniques (the full list is in Annex I).
Under such a definition, many systems would have been considered “AI”, from neural networks to an Excel sheet that uses some sort of statistical formula to produce an output.
Applying the brakes
During the summer, doubts appeared over the definition. In a conversation with high-level operators of automated systems in the public sector, European Commission staff offered a new interpretation of the definition. They explained that AI systems were only those whose operating rules were not shaped by humans. Expert systems were not AI, they said, seemingly contradicting the draft AI Act.
AlgorithmWatch asked the Commission for details. While our precise questions went unanswered, a press officer explained in October that a system counted as AI when “a mathematical or algorithmic specification is intractable”. “The proposal does not intend to introduce a comprehensive regulation of software systems or automated systems in general”, he added.
AI or not AI
We asked in particular about the AMS algorithm in Austria. It was developed by the country’s employment agency to give an “employability score” to unemployed people in order to decide whether or not to give them access to professional training. The score was calculated by processing a dozen variables such as gender, age and nationality, each associated with a weight. It also discriminated against women, as we reported in 2019, and has since been suspended.
The Commission spokesperson would not give any final opinion on the AMS algorithm but he said that, given that the weights of each variable were obtained by running a statistical regression on historical data, they were not human-made and therefore fell under the definition of AI. (He added that the algorithm was already covered by the GDPR because it uses personal data.)
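The distinction the spokesperson drew can be made concrete. The exact form of the AMS model is not public, so the sketch below is only a hedged illustration with invented variable names and made-up historical data: the scoring *rule* (a weighted sum passed through a logistic function) is human-designed, but the *weights* are obtained by fitting a statistical regression to past outcomes rather than being set by hand.

```python
# Toy illustration of a regression-fitted "employability score".
# All variable names and data are invented; the real AMS model,
# its features and its weights are not public.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(rows, labels, lr=0.1, epochs=2000):
    """Fit logistic-regression weights by plain gradient descent.

    The functional form is chosen by humans; the weights are not --
    they emerge from the historical data.
    """
    n = len(rows[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # prediction error drives the weight update
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Invented "historical" records: [age_normalized, prior_unemployment_spells]
# paired with whether the person found a job within a year (1 = yes).
history = [[0.2, 0], [0.3, 1], [0.8, 3], [0.6, 2], [0.4, 0], [0.9, 4]]
outcomes = [1, 1, 0, 0, 1, 0]

weights, bias = fit_logistic(history, outcomes)

def employability_score(person):
    """Score a new applicant with the data-derived weights."""
    return sigmoid(sum(w * x for w, x in zip(weights, person)) + bias)

score = employability_score([0.5, 1])
```

Under the Commission's October reading, it is precisely the fitted `weights` and `bias` that make such a system "AI": had the same weights been typed in by a civil servant, the identical scoring rule would fall outside the definition.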
But other automated systems would not fall under this definition. The French welfare management agency, CNAF, calculates payouts to beneficiaries using an aggregation of legacy COBOL code running over millions of lines. Because the code implements human-made rules, it would not be AI according to the Commission’s revised definition. The absence of Machine Learning does not make the system more transparent or reliable. In April, AlgorithmWatch revealed that it could automatically create illegitimate debt for beneficiaries.
The Council of the European Union, which represents Member States and is independent from the Commission, seems to share this narrow view of AI. In a document revealed by Politico on 19 November, many Member States expressed a desire for a drastic reduction of the definition’s scope.
In a new document, which was shared with AlgorithmWatch on 22 November, the Slovenian presidency of the Council offered a “compromise”. There, it argues that the AI Act should only cover “learning systems” in the sense of machine learning. Even the AMS algorithm would fall outside of the scope of the AI Act under such a definition.
This is not the final position of the Council, which will still be negotiated with the Parliament and Commission (“trilogue negotiations” in the EU institutions' jargon). But the successive shifts in the definition of AI show that many, in national capitals and in Brussels, would like to reduce the scope of the Act.
Some welcomed this move under the mistaken assumption that rule-based systems cannot qualify as Artificial Intelligence. But the definition of AI in academia – not to speak of common usage – is famously blurry. Herbert Simon, one of the field's founding figures, saw it as an attempt to replicate human heuristics, or rules of thumb. This “psychological AI”, as German psychologist Gerd Gigerenzer calls it, is rule-based and does not involve any Machine Learning.
The core of the AI Act lies in its risk-based approach. AI systems are categorized in groups, ranging from high to low risk. Each group is subject to very different rules, going from total bans to an absence of oversight. According to legal scholars Michael Veale and Frederik Zuiderveen Borgesius, operators of AI systems will have to follow the rules laid out by private certification firms.
The AI Act will probably create a huge market for certification and alter the ground rules for automated systems, as did the GDPR for personal data. For operators, and ultimately for citizens, whether or not a system is “AI” in the sense of the Act will determine how many duties and rights they have regarding its transparency and accountability.
It is also unclear which systems use Machine Learning. The French welfare agency CNAF uses an algorithm to give a “risk score” to beneficiaries (officially to detect those who might need help, more likely as a tool to launch controls). Whether or not this algorithm uses a “learning system” as defined by the Council is unknown.
The debate, however, is far from over. For Daniel Leufer, policy officer at Access Now, a digital rights organization, the narrow definition of AI is at odds with the stated goal of promoting AI uptake. Defining AI as Machine Learning might push companies to avoid the technology in favor of expert systems.
The AI Act is expected to become law in 2024. What it will regulate remains as unclear as ever.
Edited 24 Nov 11:00 to clarify the academic definition of AI.