EU rules for AI have some distance to go

The AI Act and the Directive on AI Liability aim to protect fundamental rights, health, and safety, but fall short in their current form. An op-ed, first published at Context.


Nikolett Aszódi
Policy & Advocacy Manager
Angela Müller
Head of Policy & Advocacy | Executive Director AlgorithmWatch CH

“Tell me your place of birth and I’ll tell you how much your insurance will cost,” could be the motto of Italian car insurance companies that charge different prices based on where customers were born, regardless of other relevant factors such as driving history.

Udo Enwerenezor, a senior policy advisor for the Italian non-profit Cooperazione Per Lo Sviluppo Dei Paesi Emergenti, first shed light on this practice in 2002, saying he was quoted two different prices based on the nationality he gave. Enwerenezor is Italian-Nigerian and has dual citizenship. His revelations were recently underscored by a study which looked into the opaque algorithms car insurance companies use. It found that a driver born in Ghana may be charged over 1,000 euros more than a driver born in Milan, with an otherwise identical profile.

Algorithmic decision-making today pervades most areas of our lives, often without us even noticing that a non-human system is involved, let alone comprehending how it works or what unjust effects it may have.

With the upcoming regulation on Artificial Intelligence, the so-called draft AI Act, the EU is now trying to protect fundamental rights, health, and safety against risks of harm stemming from AI. To do so, it has chosen the approach of harmonising the single market. Whether this can be an effective regulatory approach when the objective is to protect fundamental rights remains to be seen.

Above all, the Act’s market harmonisation approach tends to view AI risks through the prism of product safety, so it focuses primarily on the systems that are placed on the EU market. However, the risks that come with an algorithmic system depend heavily on the way in which it is used. Not only do the systems remain a black box, we can also only speculate about who actually deploys them and for what purpose.

Regrettably, the AI Act in its current form obliges only developers to register their high-risk systems in a publicly accessible EU-wide database. That would tell us which high-risk systems exist on the EU market, but it is just as crucial to know the context in which these systems are deployed.

Without an obligation on deployers to register their uses of high-risk systems in the database, we will not have a meaningful transparency regime. Without it, we will not gain systematic knowledge about the impact that the actual use of AI systems has on us and our society. Without it, we will not be able to exercise public scrutiny over that impact.

From a fundamental rights perspective, it seems particularly worrisome that the draft AI Act pursues the goal of protecting fundamental rights while the holders of these rights, i.e. those affected by AI systems, go unmentioned. In its current form, the Act contains no provision on the basis of which Enwerenezor, for example, could request an explanation from the car insurance firm of how his premium came about. Nor would the Act provide him with access to effective remedies if he found out he had been unjustifiably discriminated against, or allow him to bring a claim if he discovered that the system does not even comply with the Act. These shortcomings need to be corrected.

That said, the AI Act is just one legislative puzzle piece among the many that make up the EU’s digital policy. For example, the purpose of the upcoming AI Liability Directive is to ensure that civil law provides those affected by an AI system with baseline compensation for the harm they suffer. However, accountability gaps will likely remain.

For instance, the car insurance company that charges different prices to customers with identical driving histories but different birthplaces would likely not be liable under the Directive if it could demonstrate that it purchased the AI system from a recognised company and has met all obligations with regard to proper deployment. Thus, if these requirements are fulfilled but the system still has harmful effects, victims may not be entitled to compensation.

If the AI Act and its sister Directive on AI Liability are to become effective tools to protect people’s fundamental rights against risks stemming from AI, EU negotiators have some homework to do. We should also have realistic expectations of what these laws can provide for people like Enwerenezor, and of where other governance frameworks need to come in.

Read more about our policy & advocacy work on the Artificial Intelligence Act