Final EU negotiations: we need an AI Act that puts people first

As the final stage of negotiations on the Artificial Intelligence Act (AI Act) begins, AlgorithmWatch, together with 149 civil society organisations, calls on EU institutions to improve the regulation. In our civil society statement, we spell out clear suggestions for how the regulation should be changed in order to protect people and our human rights effectively.

Nikolett Aszódi
Policy & Advocacy Manager

In Europe and around the world, AI systems are used to monitor and control us in public spaces, predict our likelihood of future criminality, facilitate violations of the right to claim asylum, predict our emotions and categorise us, and to make crucial decisions that determine our access to public services, welfare, education and employment.

Without strong regulation, companies and governments will continue to use AI systems that exacerbate mass surveillance, structural discrimination, the centralised power of large technology companies, unaccountable public decision-making and environmental damage.

We call on EU institutions to ensure that AI development and use is accountable, publicly transparent, and that people are empowered to challenge harms:

1. Empower affected people with a framework of accountability, transparency, accessibility and redress

2. Draw limits on harmful and discriminatory surveillance by national security, law enforcement and migration authorities

Increasingly, AI systems are developed and deployed for harmful and discriminatory forms of state surveillance. Such systems disproportionately target already marginalised communities, undermine legal and procedural rights, and contribute to mass surveillance. When AI systems are deployed in the context of law enforcement, security and migration control, there is an even greater risk of harm and of violations of fundamental rights and the rule of law. To maintain public oversight and prevent harm, the EU AI Act must include:

3. Push back on Big Tech lobbying: remove loopholes that undermine the regulation

The EU AI Act must set clear and legally-certain standards of application if the legislation is to be effectively enforced. The legislation must uphold an objective process to determine which systems are high-risk, and remove any ‘additional layer’ added to the high-risk classification process. Such a layer would allow AI developers, without accountability or oversight, to decide whether or not their systems pose a ‘significant’ enough risk to warrant legal scrutiny under the Regulation. A discretionary risk classification process risks undermining the entire AI Act, shifting to self-regulation, posing insurmountable challenges for enforcement and harmonisation, and incentivising larger companies to under-classify their own AI systems.

Negotiators of the AI Act must not give in to lobbying efforts of large tech companies seeking to circumvent regulation for financial interest. The EU AI Act must:

Drafted by:

  1. European Digital Rights (EDRi)
  2. Access Now
  3. AlgorithmWatch
  4. Amnesty International
  5. Bits of Freedom
  6. Electronic Frontier Norway (EFN)
  7. European Center for Not-for-Profit Law (ECNL)
  8. European Disability Forum (EDF)
  9. Fair Trials
  10. Hermes Center
  11. Irish Council for Civil Liberties (ICCL)
  12. Panoptykon Foundation
  13. Platform for International Cooperation on the Rights of Undocumented Migrants (PICUM)

Read more on our policy & advocacy work on the Artificial Intelligence Act