AlgorithmWatch’s demands for improving the AI Act

As policymakers are busy shaping the AI Act, AlgorithmWatch has clear demands for what should flow into the regulation so that it genuinely protects our fundamental rights.


Status of negotiations

Negotiations on the Draft Artificial Intelligence Act (AI Act) are in full swing in the European Parliament and the Council of the EU. Today, we expect the IMCO and LIBE committees, which lead the negotiations in the Parliament, to publish their draft report.

The AI Act has the potential to set meaningful safeguards against negative effects stemming from the use of AI systems. However, the Act still has a long way to go if it is to achieve that goal, as AlgorithmWatch has pointed out in its initial position paper and, together with over 120 civil society organizations, in a joint statement. Over the past months, in close collaboration with civil society partners, AlgorithmWatch has been working intensely on proposals to shape the Act so that it lives up to its ambition: the protection of fundamental rights.

What’s at stake?

As a myriad of examples illustrate, AI – or, more precisely, automated decision-making (ADM) systems – may have discriminatory effects, infringe on our fundamental human rights, and exacerbate existing social injustices. Last year’s childcare benefits scandal in the Netherlands was a shocking example of how badly things can go when ADM systems are used without the necessary safeguards. The algorithmic system used discriminatory risk indicators such as dual nationality or low income to create risk profiles. Authorities then began to reclaim money from people who were singled out by the system, without any evidence of wrongdoing. Tens of thousands of families were impoverished and over a thousand children were taken into foster care. In another case, as also reported by AlgorithmWatch, the employment agency in Austria planned to deploy a program to evaluate the employability of unemployed people based on certain characteristics. The algorithm was found to discriminate against women and people with disabilities.

Would the AI Act be a meaningful tool to protect us from such events? In its current form, it does not look too promising. To fortify its regulatory framework so that cases like the above can be prevented, AlgorithmWatch demands the following:

Our key demands

Context matters. The Act’s current approach of assigning AI systems to different risk categories ex ante fails to consider that the level of risk also depends on the context in which a system is deployed and cannot be fully determined in advance. On the one hand, we need consistent update mechanisms for all risk categories in the AI Act. On the other hand, the context of a system’s deployment can only be taken into account if those deploying it – the “users” in the terminology of the Act – are put under more stringent obligations.

Fundamental rights impact assessment. Before a high-risk system is put into use, its user must be obliged to carry out a fundamental rights impact assessment to check whether the deployment of that system is in line with human rights standards. Deployments of AI systems by public authorities should always be accompanied by a fundamental rights impact assessment, regardless of the risk level of the system.

Public register for highly consequential AI use cases. Transparency is the first step towards accountability. The AI Act foresees an EU database in which all high-risk AI systems must be registered by their provider. In our view, this is an important step, but it does not go far enough. Deploying the same AI system in different contexts may result in very different impacts on our lives – hence it is essential that all high-risk use cases be registered in the database as well. Furthermore, public authorities make decisions with crucial effects on our lives, typically without any alternative service provider we could turn to. Due to this unique responsibility they bear, we demand that all public authority use cases of AI systems – regardless of their risk level – be registered in the EU database.

For further details on AlgorithmWatch’s position regarding the EU database, see our detailed recommendations.

A broad definition of AI in the AI Act. Last year, the Slovenian Presidency of the Council significantly narrowed the definition of AI in the Act, which would drastically limit its scope. As AlgorithmWatch reported, under such a narrow definition, many systems that may have a significant impact on people’s lives would not necessarily fall under the Act. We argue that the risks a system poses do not depend solely on the complexity of the technology involved but on the context in which the system is used. A broad definition is therefore key.

Exemptions for “military purposes” and national security must be revised. The AI Act excludes AI systems developed or used exclusively for military purposes from its scope of application. Moreover, further exemptions for AI systems developed or used exclusively for national security purposes have been proposed in the Council. We demand that the Act clearly define what military purposes entail and unambiguously include the area of national security within its scope.

Ecological impact of AI systems must be accounted for. AI systems may have a significant impact on the environment due to their resource consumption and emissions. Based on initial findings from our SustAIn project, we call for environmental transparency requirements in the AI Act: obligations to disclose the environmental impact of all AI systems – stemming from a system’s design, data management, training, and underlying infrastructure – regardless of risk level.

For further details on AlgorithmWatch’s position on the AI Act and ecological sustainability, see our detailed recommendations.

Closing loopholes in the prohibitions. The AI Act would prohibit certain uses of AI systems that are incompatible with fundamental rights. While we welcome this, we demand that the loopholes be closed so that the provisions in Article 5 can truly protect people from systems that violate their rights. In addition, further use cases must be prohibited.

The Council and the Parliament are expected to develop their respective positions on the AI Act over the course of this year. Together with our civil society partners, AlgorithmWatch will continue its efforts to turn the Act into a powerful tool for the protection of our fundamental rights.

Read more on our policy & advocacy work on the Artificial Intelligence Act
