AlgorithmWatch demands the regulation of General Purpose AI in the AI Act

There has been much speculation recently about large language models sounding a death knell for humanity. This is a red herring that distracts from the real harm people suffer from such models. To prevent an escalation of harmful effects, AlgorithmWatch calls on the German Government and the EU to implement an array of effective precautions.

Matthias Spielkamp
Executive Director, Co-Founder & Shareholder

Applications like ChatGPT and Midjourney are all the rage now. Many people have tried them in the last couple of months and were stunned by what these systems are capable of. It was almost impossible not to hear about them in the media, but the coverage often took a sensationalist turn, declaring that AI systems could get out of control and eventually turn humanity into their lackey. Or is there a grain of truth to it? Are the concerns about large generative systems, so-called General Purpose AI (GPAI), indeed justified?

AlgorithmWatch considers these applications to be a huge step forward in the development of algorithmic systems. To dismiss them as gigantic calculators wouldn't do justice to their capabilities, and playing them down amounts to playing down the risks that come with them. GPAI systems pose a risk to our fundamental rights: they can discriminate against people and breach people's right to privacy. To train their models, Big Tech companies such as OpenAI are exploiting people in the Global South, and the systems' energy consumption is aggravating the climate crisis. At the same time, only very few global players have the financial means necessary to invest in the development of generative AI, which facilitates monopolization.

Already today, with just a few clicks, these systems generate misleading and manipulative images that are virtually indistinguishable from photographs, produce a staggering amount of fake news very fast and at extremely low cost, and make people believe that they are dealing with a thinking mind equipped with intentions. All this has the potential to poison the sphere of public debate, which is at the core of democratic decision-making, and malicious actors might benefit from this now and in the future. The providers of systems that promote such developments must be held accountable.

Alarmist warnings that GPAI might put an end to humanity are devoid of any foundation and motivated by a political strategy: "Such horror scenarios are designed to distract us from the systems' dangerous impacts, which have to be addressed immediately with adequate means," says Matthias Spielkamp, AlgorithmWatch's founder and executive director. "They divert our attention and energy toward risks that are, by everything we know, extremely unrealistic, such as the possibility that the systems get out of control."

These scenarios are rooted in the ideology of so-called longtermism, which prioritizes improving the long-term future while neglecting negative impacts on today's world population. This is one of the reasons why AlgorithmWatch doesn't support the Future of Life Institute's open letter calling for a six-month pause in the development of these systems. The open letter's suggestions are utterly unsuitable for containing the very real risks of GPAI.

"Companies like Microsoft, Google, or Meta/Facebook have invested billions in GPAI. Asking them for a six-month break won't protect the world from the negative effects the development of GPAI has already set in motion. Instead, such a break would help reinforce a monopoly on defining what an 'ethical' AI system is," says Angela Müller, Head of Policy and Advocacy at AlgorithmWatch and Head of AlgorithmWatch CH in Zurich. "It would only serve the interests of private actors who lack democratic legitimacy, first and foremost their profit interests," Angela Müller adds. "We have to make sure that the companies responsible for developing and implementing these systems can no longer shirk responsibility for how the systems are used. Now it's up to politics."

AlgorithmWatch endorses the analysis of a group of international AI experts (see below for details) who have worked out suggestions for how the EU's draft AI Act could ensure that the risk of harms caused by GPAI is adequately addressed and reduced. In December, the EU member states reached an agreement in the Council stating that the AI Act should specifically regulate so-called General Purpose AI (GPAI). GPAI includes the currently intensely discussed Large Language Models (LLMs). Right now, the EU Parliament is working on its position on the draft AI Act. Subsequently, the Council, the Parliament, and the Commission will deliberate on it in the so-called trilogue negotiations. Once the three institutions reach a final agreement, the law can be adopted.

We specifically demand that the European legislators, especially the German Government, and the European Parliament implement the following recommendations in the trilogue negotiations on the draft AI Act:

Note by AlgorithmWatch: The definition that the EU member states' representatives agreed on before entering the trilogue negotiation phase says: "'general purpose AI system' means an AI system that - irrespective of how it is placed on the market or put into service, including as open source software - is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems." (https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf)

Note by AlgorithmWatch: Businesses as well as public authorities integrate GPAI systems that they did not develop themselves and do not control, via software interfaces, into products and services that are offered to or applied to people. These businesses and authorities are also responsible for the deployment of these systems. This does not mean that the companies that developed the systems are no longer accountable for them.

The five considerations guiding the regulation of 'General Purpose AI' in the EU's AI Act were worked out by:

The considerations were signed by 55 other experts and nine organizations – among them AlgorithmWatch. Download the PDF document:

Read more on our policy & advocacy work on the Artificial Intelligence Act