Op-Ed

Generative AI must be neither the stowaway nor the gravedigger of the AI Act

Apparently, adoption of the AI Act as a whole is at risk because the EU Council and Parliament are unable to reach a common position on generative AI, with some Member States wanting to exempt generative AI from any kind of regulation. This is highly irresponsible, as it threatens effective prevention of harms caused by AI-driven systems in general.


Since the public release of ChatGPT, EU institutions have been working at warp speed to respond legislatively to the rapid proliferation of generative AI systems from OpenAI, Google, Meta, Anthropic, Aleph Alpha and others. The goal: To include such systems in the scope of the proposed AI Act, a landmark law that has been in the works for three years. It was to be finalised in December in the so-called trilogue negotiations, the EU law-making process in which the Commission, the Parliament and the Council of Member States must agree on a compromise text.

Reportedly, though, negotiations recently broke down when the Council and the Parliament were unable to align on a common position regarding generative AI. This places in jeopardy the effective prevention of harms not only from generative AI, but from AI-driven systems in general. EU institutions have to get their act together, lest they risk leaving almost 500 million people unprotected in the face of AI’s well-known hazards.

In its role as compromise-builder, the current Spanish Presidency of the Council has tried to create a middle ground between the Council’s hands-off approach and the Parliament’s earlier position of establishing shared ground rules for all generative AI systems – which the Council calls general purpose AI (GPAI) and the Parliament calls ‘foundation models’. In essence, however, the Spanish proposal leans towards the Council’s line: Only GPAI systems that are high-risk would fall under the more stringent provisions of the AI Act; all other systems would merely be subject to basic transparency obligations. The big question thus is: When is a foundation model high-risk?

According to the Spanish Presidency, so-called ‘high-impact’ foundation models are high-risk, that is “any foundation model trained with large amounts of data and with advanced complexity, capabilities and performance well above the average for foundation models, which can disseminate systemic risks along the value chain, regardless of whether they are integrated or not in a high-risk system”. This qualification is problematic for at least five reasons.

First, it introduces considerable legal uncertainty. Criteria such as “above average”, “large amounts of data” and “advanced complexity, capabilities and performance” are imprecise proxies for defining high impact on society.

This leads to the second problem: Where exactly is society in all of this? The suggested criteria for determining whether a model is high-impact refer to compute or the amount of data used – in other words, parameters that describe the foundation model itself, but not its impact on society, safety or fundamental rights. Different criteria or benchmarks are needed to identify a model’s potential or foreseeable impact on society. To define such criteria, the Commission and the AI Office cannot rely on technical expertise alone; they crucially depend on expertise from the social sciences and humanities, civil society organisations and representatives of affected communities, too.

The third problem is the unsubstantiated assumption that systemic risks stem from the fact that a model is trained on a lot of data and has advanced complexity and performance. However, “large” is not necessarily “unsafe” – in fact, advanced performance and training on large amounts of data may make a model safer, provided the development process emphasises lawful, bias-mitigated training data and takes fundamental rights and societal perspectives into account. And “average” or “small-scale” is not necessarily “safe” – an ill-intentioned user, for example, can cause a lot of harm with an average model.

A fourth fundamental flaw of the proposal is its strong focus on the responsible behaviour of providers and the lack of a proper division of responsibility with ‘downstream’ users, meaning professional users who build their applications on top of the large models. Even worse, according to the Presidency’s position, providers could use their terms of use to dodge responsibility for the way their models are used downstream, as long as they make reasonable efforts to detect abuse. Given the potential scale of adoption, this will likely end up as a paper tiger and provide an easy way out for developing companies, releasing them from accountability in concrete cases.

Fifth, the proposal postpones the obligation to monitor for systemic risks to some undetermined moment in the future when foundation models become even more powerful. This focus on future risk diverts attention from the fundamental concerns foundation models and GPAI systems pose today, such as the unchecked use of publicly collected private and personal data, the reinforcement of structural inequality experienced by historically disadvantaged, marginalised, vulnerable, or otherwise oppressed communities, the exploitation of data workers, and these systems’ impact on the environment.

There are, of course, good arguments for differentiating obligations according to the level of impact. But the Spanish Presidency text itself points to a viable alternative: to distinguish responsibilities along the value chain and assign them according to the impact that the different actors create. The development and deployment of foundation models involves a complex value chain of actors. In many cases, responsibility for safety, compliance with fundamental rights and the obligations of the AI Act cannot be allocated solely to either the provider – who makes critical decisions about the training data, parameters, the training of the model, labour conditions, ecological footprint etc. – or the deployer – who determines how the model is used in a concrete context.

To close the resulting accountability gap, the shared responsibility of developers and deployers needs to be spelled out. On this point, the Spanish proposal lacks clarity: the EU institutions should explicitly define the shared responsibility of developers and deployers, each within their own sphere of influence. For example, deployers are not supposed to be part of the proposed forum of cooperation with the developers of high-impact foundation models to discuss best practices and draw up voluntary codes of conduct. They should be.

Last but not least, there is a blatant absence within these cooperative endeavours: that of regulators, civil society organisations, representatives of affected groups, and the researchers and red-teamers whose job it will apparently be to flag potential risk areas and vulnerabilities. Their exclusion undermines the legitimacy and usefulness of the AI Act itself. Right now, there appears to be a willingness among large Member States – first and foremost Germany, France and Italy – to let the AI Act negotiations collapse in the final stretch, presumably to protect European companies like Aleph Alpha and Mistral from what these companies lambaste as overregulation. But any presumed contradiction between the protection of individual rights and democracy on the one hand, and innovation on the other, is a false dilemma. Given that there are good proposals on the table to reconcile these goals, letting the AI Act go bust is not just entirely unnecessary. It is reckless.

This Op-Ed is based on the recommendations developed during an experts' workshop by AlgorithmWatch and the AI, Media, and Democracy Lab of the University of Amsterdam in September 2023.
