AI Safety Summit

Missed Opportunities to Address Real Risks

The UK did not need to throw its full weight behind the frontier risks narrative – there are other approaches it could have taken.

Simon Dawson / No 10 Downing Street - not edited (CC BY-NC-ND 2.0)

Oliver Marsh
Project Lead "Auditing Algorithms for Systemic Risks"

Discussions about technology are often discussions about power. The famous phrase of the supercomputer HAL in 2001: A Space Odyssey – “I’m afraid I can’t do that, Dave” – evocatively captures the feeling of powerlessness of a human faced with computerized decision-making. HAL is science fiction – but for many groups such powerlessness is a fact. Organizations including AlgorithmWatch have tracked ways in which technologies make harmful decisions about welfare payments, credit scores, police reports, students' education, and numerous other facets of people’s lives – often hitting those with little power to fight back. Vulnerable groups are also affected by decisions about how technologies are developed. These include workers in Kenya who were paid between $1.32 and $2 per hour to label toxic – even traumatizing – content for ChatGPT, or the residents of Quilicura in Chile, where a Google data center risks exacerbating the dangers of drought. Those affected by these decisions have very little power to protect themselves, to ask what is happening, or to challenge decisions made about their lives. This powerlessness is – or should be – a central question for discussions of safe, ethical, and fair AI.

Appealing sci-fi catastrophes

Governance, policy, and regulation can be tools for better distributing power. Last week the UK hosted an AI Safety Summit, bringing together a range of governmental and non-governmental actors. It explicitly focused on hypothetical futures – “frontiers” – including catastrophic, even science-fiction-style risks. Considering how to address potential novel risks around technology is not, in and of itself, a bad thing. But there are very important questions of who gets attention, and consequently who gets power. We argue that this emphasis on frontier risks is disappointing given the power the UK has, as an influential player on the world stage, to shape narratives around AI – and that it misses other opportunities for innovative problem-solving around the risks of AI.

Michelle Donelan, the UK Secretary of State for Science, Innovation and Technology, argued on the Politico Tech podcast that the Summit’s focus was needed because in the landscape of global discussions “there hasn’t been in-depth work… solely on the Frontier.” But when one looks at work on “AI Safety”, there is a swathe of well-resourced organizations focused on frontier and catastrophic risks, often connected to leading technologists and technology companies. Such organizations already have numerous advantages. From an attention-grabbing and narrative-shaping point of view, catastrophic risks are an appealing topic for journalists to write about. The reporting on the AI Summit, both in the UK and abroad, followed this pattern and largely repeated the frontier risks emphasis. Even critical coverage tended to reproduce the “unstoppable technology” narrative (a phenomenon Lee Vinsel has called ‘criti-hype’), rather than ask what was missing from the conversation.

In addition, the connections of such organizations with powerful technology players bring large amounts of influence and money. At its most cynical, we can see such interventions as attempts to shore up market power, capture regulation, and build hype around products. It is worth remembering that last year technology companies spent €113m lobbying the EU, compared to the roughly €2m annually that the European AI & Society Fund – one of the main philanthropic funds for EU-based civil society organizations working on AI – spends on advocacy around human rights, democracy, and civil society. This is a stark example of power imbalance in monetary terms. But even without this level of cynicism, it is to be expected that well-meaning technologists would be more interested in focusing on the latest technologies than on duller – but more immediate – harms.

Engaging with real and present risks

Governments should, of course, pay attention to discussions around national security risks, even speculative ones. But governments can also provide a valuable counterweight to interests already represented by wealth and power. The UK did not need to throw even more weight behind the frontier risks narrative; instead, it could have used the Summit to contribute to cutting-edge thinking on safe and ethical AI in the more immediate term.

Outside the UK, there are examples of approaches to governing AI which engage with real and present risks. The Summit, and its outcomes, could be a useful bolster to these efforts. For example, the EU is currently finalizing its AI Act. This Act takes a pragmatic approach built on existing product law and pays heed to immediate harms and human rights – though we argue there are still loopholes and weaknesses which need addressing in the (probably final) negotiations over the next few months. The Act classifies AI systems by use case: a small range of prohibited “unacceptable risk” use cases (such as social scoring systems); a much larger range of “limited” or “minimal” risk use cases, which are not subject to new regulations (apart from some transparency requirements to limit deception); and a middle tier of “high-risk” applications, such as transport or welfare, which must demonstrate adherence to particular standards depending on their intended use.

The outcomes of the Summit could support the important task of assessing potential “high-risk” AI systems against standards – in particular, the agreement from AI companies to allow governments a role in testing models before deployment. This could even help address a concern many have about the AI Act: that companies may be able to self-assess in non-transparent ways. However, an emphasis on testing AI for “safety” against frontier risks distracts from the broader benefits of such audits – demonstrating that an end-to-end process meets strong ethical and sustainability standards, accounting for aspects such as environmental and social impacts, fair use of human labor, data collection practices, and so on. The new AI Safety Institutes announced by the UK, the US, and others should ensure their staffing and processes account for these issues, and should collaborate to ensure their findings can effectively support efforts like the AI Act.

Empowering the powerless instead of pushing narratives

As well as supporting other actors, the UK will want ways to distinguish itself from the EU (particularly given Brexit). On the international stage the UK often extols its strong research sector and its pragmatic, flexible, and innovative approaches to governance. These characteristics could have underpinned a different emphasis for the Summit, and for the outcomes from it. The new national AI Safety Institutes could be tasked with researching and innovating ways of incorporating democracy and citizens' voices into the development of AI. They could act as interdisciplinary centers, bringing together emerging ideas from technologists, such as Collective Constitutional AI, with innovations in public engagement, such as Citizens' Assemblies, to ensure these ideas are directed democratically and for the public good. The US is experimenting with ways to combine government guidance and venture capital towards building “responsible AI.” But a national institute like an AI Safety Institute would be better placed to build more inclusive coalitions beyond private companies, and to pursue more ambitious goals than guidelines and voluntary commitments. Research projects into the “State of AI” – another outcome from the Summit – could locate and direct attention towards AI projects which genuinely empower vulnerable groups, and towards methods for training models which do not unjustly exploit people and resources. In the “What Works” evaluation tradition of UK policy-making, progress on these metrics could be a measure of success for innovations in “AI Safety.”

The Summit has concluded, but these – and other – ideas still offer numerous opportunities for inclusive and equitable innovation, which the UK and partners could champion. Such discussions must be grounded in a recognition that existing harms – from exploitative supply chains in content moderation, to the climate impacts of data centers in drought-hit regions, to decisions made about welfare claimants, migrants, and other vulnerable groups – are not novel and do not demand innovation. They demand closer adherence to existing human rights and global equity. There are a lot of powerful interests already pushing narratives towards novel and dramatic frontiers. Governments have a role in empowering the powerless, and that starts by giving them attention.
