EU policy makers: Protect people’s rights, don’t narrow down the scope of the AI Act!

EU Member States are pushing for a definition of Artificial Intelligence (AI) in the proposed AI Act that would dramatically limit its scope and exclude many systems that already have serious consequences for people’s fundamental rights and life prospects. We demand that the Council of the European Union change course, and we call on the European Parliament to defend a position that puts people’s rights first instead of turning the AI Act into a paper tiger.

As reported today by AlgorithmWatch, the Council of the European Union, which represents the Union’s Member States, is proposing to drastically narrow the definition of AI compared to the one in the initial draft AI Act published in April 2021. At present, Article 3 of the Act defines an Artificial Intelligence system as software that generates an output for a given objective, provided it uses “machine learning approaches”, “logic- and knowledge-based approaches” (which include expert systems), or “statistical approaches”, among other techniques (the full list is in Annex I).

Under this apparently broad definition, many systems would count as “AI”, from neural networks to far more rudimentary statistical formulas. Importantly, this means that the current definition would also cover software-based automated systems that have far-reaching consequences for people even when they are not based on complex machine learning technologies. In the EU, such systems are used by public authorities to manage services from welfare payments to unemployment training, and by private companies for purposes such as scoring and profiling people to assess their creditworthiness. The deployment of these systems is growing, as AlgorithmWatch’s Automating Society reports and journalistic reporting have documented.
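To make the point concrete, consider a purely hypothetical sketch in Python (the function name, thresholds and rules below are our own invention, not any real deployed system): a credit-scoring routine built from hand-written rules alone, with no machine learning whatsoever, whose output is nonetheless highly consequential for the person being scored. Under a narrowed, ML-only definition, a system like this would fall outside the AI Act entirely.

```python
# A minimal, hypothetical sketch: a purely rule-based credit decision.
# It involves no machine learning at all, yet its output can determine
# whether a person is offered a loan.

def rule_based_credit_decision(income: float, debts: float, missed_payments: int) -> bool:
    """Approve or reject an applicant using fixed, hand-written rules."""
    # Hard-coded thresholds: no training data, no learned model.
    debt_ratio = debts / income if income > 0 else float("inf")
    if missed_payments > 2:
        return False
    if debt_ratio > 0.4:
        return False
    return True

# The decision is fully automated and highly consequential for the
# applicant, even though the "technique" is a handful of if-statements.
print(rule_based_credit_decision(income=2800.0, debts=1500.0, missed_payments=1))
```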

The goal of EU legislation on AI should be to create a legal environment that benefits people. To achieve this, legislators must cover any and every system that might have an impact on fundamental rights. In conflict with civil society’s requests and recommendations, the AI Act takes a risk-based approach and mostly sets requirements for a limited list of so-called high-risk systems (the full list is in Annex III). Such an approach can only be effective if it rests on a broad definition of AI, such as the one in the Commission’s original proposal.

Without such a broad definition, the AI Act cannot adequately address all systems that pose risks to fundamental rights, and it will create dangerous loopholes. Moreover, by restricting its scope to machine learning and other advanced techniques while excluding ‘simpler’, rule-based systems, the AI Act would reward deployers for choosing exactly those unregulated, simpler applications, disincentivising the use of cutting-edge approaches to AI. All of this could undermine the European Union’s stated objective of not only promoting AI uptake but also protecting fundamental rights.

In light of this objective, what counts is the impact automated systems have on people, not the specific type of technology causing that impact. The attempt to narrow the definition of AI systems threatens to derail the entire approach of protecting EU citizens’ rights, autonomy and life prospects in a world increasingly pervaded by complex computational systems, and it runs counter to the Commission’s, Parliament’s and Member States’ pledges to put people’s rights over company profits. It would also undermine the goal of making public authorities and private companies accountable, and of enhancing their trustworthiness, when they use new technologies.

Access Now and AlgorithmWatch demand that the Council of the European Union reverse course and clarify that every highly consequential, software-based automated system is covered by the AI Act’s definition of Artificial Intelligence, and we call on the European Parliament to resist attempts to turn the AI Act into a mere window-dressing exercise.

Read more on our policy & advocacy work on the Artificial Intelligence Act
