Applications like ChatGPT and Midjourney are all the rage. Many people tried them out over the last few months and were stunned by what these systems are capable of. It was almost impossible not to hear about them in the media, but the coverage often took a sensationalist turn, declaring that AI systems could spiral out of control and eventually turn humanity into their lackey. Is there a grain of truth to this? Are the concerns about large generative systems, so-called General Purpose AI (GPAI), indeed justified?
AlgorithmWatch considers these applications a major step forward in the development of algorithmic systems. Dismissing them as gigantic calculators would not do justice to their capabilities, and playing them down amounts to playing down the risks that come with them. GPAI systems pose a risk to our fundamental rights: they can discriminate against people and breach their right to privacy. To train their models, Big Tech companies such as OpenAI exploit workers in the Global South, and the systems' energy consumption aggravates the climate crisis. At the same time, only very few global players have the financial means to invest in the development of generative AI, which facilitates monopolization.
Already today, with just a few clicks, these systems generate misleading and manipulative images that are virtually indistinguishable from photographs, produce staggering amounts of fake news quickly and at extremely low cost, and make people believe that they are dealing with a thinking mind equipped with intentions. All of this has the potential to poison the sphere of public debate that lies at the core of democratic decision-making, a development from which malicious actors might benefit now and in the future. The providers of systems that enable such developments must be held accountable.
Alarmist warnings that GPAI might put an end to humanity lack any foundation and are motivated by a political strategy: "Horror scenarios like these are designed to distract us from the systems' dangerous impacts, which have to be addressed immediately with adequate means," says Matthias Spielkamp, AlgorithmWatch's founder and executive director. "They divert our attention and energy toward risks that are, as far as we know, extremely unrealistic, such as the possibility that the systems get out of control."
These scenarios are rooted in the so-called longtermism ideology, which prioritizes improving the long-term future while neglecting negative impacts on today's world population. This is one of the reasons why AlgorithmWatch doesn't support the Future of Life Institute's open letter calling for a six-month pause in the development of these systems. The open letter's suggestions are utterly unsuitable for containing the very real risks of GPAI.
"Companies like Microsoft, Google, or Meta/Facebook have invested billions in GPAI. Asking them for a six-month break won't protect the world from the negative effects that the development of GPAI has already set in motion. Instead, such a break would help reinforce a monopoly over defining what an 'ethical' AI system is," says Angela Müller, Head of Policy and Advocacy at AlgorithmWatch and Head of AlgorithmWatch CH in Zurich. "It would only serve the interests of private actors who lack democratic legitimacy, first and foremost their profit interests," she adds. "We have to make sure that the companies responsible for developing and deploying these systems can no longer shuffle off responsibility for how the systems are used. Now it's up to politics."
AlgorithmWatch joins the analysis of a group of international AI experts (see below for details) who have drawn up suggestions for how the EU's draft AI Act could ensure that the risk of harms caused by GPAI is adequately addressed and reduced. In December, the EU member states reached an agreement in the Council stating that the AI Act should specifically regulate so-called General Purpose AI (GPAI), a category that includes the currently much-discussed Large Language Models (LLMs). Right now, the EU Parliament is working on its position on the draft AI Act. Subsequently, the Council, the Parliament, and the Commission will deliberate on it in the so-called trilogue negotiations. Once the three institutions reach a final agreement, the law can be adopted.
We specifically demand that the European legislators – especially the German Government – and the European Parliament implement the following recommendations in the trilogue negotiations on the draft AI Act:
- GPAI is an expansive category. For the EU AI Act to be future-proof, it must apply across a spectrum of technologies, rather than be narrowly scoped to chatbots/large language models (LLMs). The definition used in the Council of the EU's general approach for trilogue negotiations provides a good model.
Note by AlgorithmWatch: The definition on which the EU member states' representatives agreed before entering the trilogue negotiation phase reads: "'general purpose AI system' means an AI system that - irrespective of how it is placed on the market or put into service, including as open source software - is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems." (https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf)
- GPAI models carry inherent risks and have caused demonstrated and wide-ranging harms. While these risks can be carried over to a wide range of downstream actors and applications, they cannot be effectively mitigated at the application layer.
- GPAI must be regulated throughout the product cycle, not just at the application layer, in order to account for the range of stakeholders involved. The original development stage is crucial, and the companies developing these models must be accountable for the data they use and design choices they make. Without regulation at the development layer, the current structure of the AI supply chain effectively enables actors developing these models to profit from a distant downstream application while evading any corresponding responsibility.
Note by AlgorithmWatch: Businesses and authorities integrate GPAI systems that they neither developed nor control into products and services via software interfaces, and then offer these products and services to people or apply them to people. These businesses and authorities are also responsible for how their systems are deployed. This doesn't mean that the companies that developed the systems are no longer accountable for them.
- Developers of GPAI should not be able to relinquish responsibility using a standard legal disclaimer. Such an approach creates a dangerous loophole that lets original developers of GPAI (often well-resourced large companies) off the hook, instead placing sole responsibility with downstream actors that lack the resources, access, and ability to mitigate all risks.
- Standardized documentation practices and other approaches to evaluating GPAI models, specifically generative AI models, across many kinds of harm are an active and hotly contested area of research. Regulation should avoid endorsing narrow methods of evaluation and scrutiny that could result in a superficial checkbox exercise. These methods should instead be subject to wide consultation, including with civil society, researchers, and other non-industry participants.
The five considerations to guide the regulation of ‘General Purpose AI’ in the EU’s AI Act were worked out by:
- Dr. Timnit Gebru, Founder and Executive Director, Distributed AI Research Institute
- Dr. Alex Hanna, Research Director, Distributed AI Research Institute
- Amba Kak, Executive Director, AI Now Institute; Senior Research Scholar at Northeastern Khoury College of Computer Science
- Dr. Sarah Myers West, Managing Director, AI Now Institute
- Maximilian Gahntz, Senior Policy Researcher, Mozilla Foundation
- Irene Solaiman, Policy Director, Hugging Face
- Dr. Mehtab Khan, Associate Research Scholar, Yale Information Society Project
- Dr. Zeerak Talat, Independent Researcher (Computer Science and Artificial Intelligence)
The considerations were signed by 55 other experts and nine organizations – among them AlgorithmWatch. Download the PDF document: