New report highlights the risks of AI on fundamental rights

The European watchdog for fundamental rights published a report on Artificial Intelligence. AlgorithmWatch welcomes some of the recommendations, and encourages a bolder approach.

The European Union Agency for Fundamental Rights (FRA), which supports European institutions and member states on fundamental rights issues, released a report on Artificial Intelligence on 14 December. With this report, FRA offers sound advice to European institutions and member states. In slightly over 100 pages, the agency provides a compendium of existing and desirable legislation on the topic.

Outsourcing expertise

Research for the report was outsourced to Ecorys, a private consultancy, for a lofty €240,000. Ecorys interviewed 101 experts, most of whom have direct experience with the deployment of automated systems in France, Spain, Estonia, the Netherlands and Finland. Unfortunately for readers, neither the names of the experts nor the projects they work on were made public.

It is laudable that FRA continues to address the issue, following its 2018 report on Big Data. However, contracting out the crucial task of information gathering dilutes FRA’s expertise, which is supposed to be at the core of its mission. More concretely, it leads to misunderstandings. The report considers, for instance, that an unnamed system, which can only be VioGén, a tool deployed by the Spanish national police to assess the dangerousness of male partners, uses Artificial Intelligence (p. 36). In fact, as our investigation showed, it is a deterministic, points-based questionnaire.
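To illustrate the distinction, here is a minimal sketch of how a deterministic, points-based questionnaire works. The questions, weights and thresholds below are invented for illustration and are not the actual VioGén protocol; the point is simply that fixed rules, not machine learning, produce the score.

```python
# Hypothetical points-based questionnaire, NOT the actual VioGén protocol.
# Each "yes" answer adds a fixed weight; the total maps to a risk level.
QUESTIONS = {
    "previous_violence": 3,
    "threats_made": 2,
    "access_to_weapons": 2,
    "victim_fears_escalation": 1,
}

def risk_level(answers: dict[str, bool]) -> str:
    """Sum the fixed weights of every 'yes' answer and map the total to a level."""
    total = sum(weight for question, weight in QUESTIONS.items() if answers.get(question))
    if total >= 6:
        return "high"
    if total >= 3:
        return "medium"
    return "low"

print(risk_level({"previous_violence": True, "threats_made": True}))  # -> "medium"
```

Nothing is learned from data here: the same answers always yield the same score, which is precisely what distinguishes such a tool from an AI system.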

No idea what they’re doing

The interviews with practitioners reveal that very few understand how the systems they operate work (p. 66). In particular, almost all argue that a system cannot discriminate against people on protected criteria (such as gender, race, religion or age) as long as the variable is not explicitly part of the data the system has access to (p. 71). This belief is mistaken. Machine Learning tools have a proven record of discriminating against protected groups through proxy variables. Amazon provided an infamous case study in 2018, when it was revealed that one of its automated resume-screening systems systematically downgraded women, even though it did not take gender into account.
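The mechanism is easy to reproduce. The following is a minimal, self-contained sketch (not Amazon’s system; the data and feature names are invented) showing how a model trained without the protected attribute can still penalize a group through a correlated proxy variable. It assumes numpy and scikit-learn are available.

```python
# Minimal sketch of proxy discrimination on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)           # 0 = male, 1 = female (never shown to the model)
proxy = gender + rng.normal(0, 0.3, n)   # e.g. a CV keyword strongly correlated with gender
skill = rng.normal(0, 1, n)              # a genuinely job-relevant feature

# Historical decisions were biased against women:
hired = (skill - 1.0 * gender + rng.normal(0, 0.5, n) > 0).astype(int)

# Train WITHOUT the gender column, only on the proxy and the relevant feature.
X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print("mean score, men:  ", scores[gender == 0].mean())
print("mean score, women:", scores[gender == 1].mean())
# The gap shows the model reproduces the historical bias via the proxy.
```

Dropping the gender column changes little: the proxy carries almost the same information, so the model reproduces the bias baked into the historical decisions it was trained on.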

Respondents all understood that privacy was a key part of any automated system, but were very unsure of what this meant in practice. They all knew that the GDPR existed, to the point that it eclipsed other pieces of legislation in their minds (p. 62), but most were at a loss when it came to actually implementing its provisions and emphasized that the text lacked clarity (p. 66).

Focus on the private sector

Based on these findings, FRA published six “opinions”, or recommendations. In opinion 1 (p. 7), FRA suggests that “relevant safeguards need to be provided for by law to effectively protect against arbitrary interference with fundamental rights and to give legal certainty to both AI developers and users”, which is a welcome departure from the many calls for industry to self-regulate.

But FRA also recommends that the “EU and Member States should consider targeted actions to support those developing, using or planning to use AI systems, to ensure effective compliance with their fundamental rights impact assessment obligations”, which could include funding, guidelines, training or awareness raising. Increasing expertise in this field is highly relevant and badly needed, but the suggestion that these actions should particularly target the private sector (opinion 2, p. 8) is puzzling.

The use of automated systems by the public sector, especially in welfare management, can have direct and significant consequences for people’s rights, as the report itself documents. In addition, whereas citizens can choose between service providers in most private-sector domains, no such choice exists for public administration. Giving preference to capacity-building for private companies over the public sector is therefore hard to justify.

Limited transparency

Along the same lines, the recommendation that the “EU legislator and Member States should ensure effective access to justice for individuals in cases involving AI-based decisions” (opinion 6, p. 13) is worth stressing. Laws in many areas need to be adapted to new challenges posed by automated decision-making or decision support systems.

A crucial basis for further assessment and appropriate regulatory action is a much higher level of transparency concerning the use of such systems. As a consequence, FRA recommends that in order to “ensure that available remedies are accessible in practice, the EU legislator and Member States could consider introducing a legal duty for public administration and private companies using AI systems to provide those seeking redress information about the operation of their AI systems. This includes information on how these AI systems arrive at automated decisions. This obligation would help achieve equality of arms in cases of individuals seeking justice. It would also support the effectiveness of external monitoring and human rights oversight of AI systems” (opinion 6).

While this is necessary, it raises the question: why only offer this level of transparency to those seeking redress, when harm has already been done? It would have been both powerful and coherent to ask for mandatory registers that make this information publicly accessible, ideally including the results of assessments of the systems’ impact. This would give oversight institutions, researchers, journalists and civil society a much better opportunity to scrutinize the use of these systems and their consequences, as we argued in our position on the European Commission’s consultation on AI.
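As a purely illustrative sketch, a register entry could bundle the information the report already names: who operates the system, what it is used for, how it arrives at decisions, and the outcome of the impact assessment. The fields and values below are assumptions, not a proposed standard.

```python
# Hypothetical shape of a public register entry; fields and values are invented.
from dataclasses import dataclass, asdict
import json

@dataclass
class RegisterEntry:
    """Illustrative fields for a public AI register entry (not a proposed standard)."""
    system_name: str
    operator: str
    purpose: str
    decision_logic_summary: str    # how the system arrives at automated decisions
    impact_assessment_result: str  # outcome of the fundamental rights impact assessment
    redress_contact: str           # where affected people can seek redress

entry = RegisterEntry(
    system_name="Example benefits-fraud scoring tool",
    operator="Example municipality",
    purpose="Prioritise case reviews in welfare administration",
    decision_logic_summary="Gradient-boosted classifier over claim-history features",
    impact_assessment_result="Published at example.org/assessments/1",
    redress_contact="ombudsperson@example.org",
)
print(json.dumps(asdict(entry), indent=2))
```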

The elephant in the room

Throughout the report, FRA only hints at the elephant in the room: enforcement. In a lone sentence on page 95, the authors admit that data protection authorities, which are responsible for enforcing the GDPR, are overstretched and cannot cope with their workload, implying that the same could happen with improved legislation on AI.

Fundamental rights will only be protected if relevant legal provisions can be enforced, as AlgorithmWatch pointed out in its analysis of the Digital Services Act. Resourcing enforcement, however, falls outside FRA’s remit: it is up to the Member States to give enforcement institutions the resources they need to do their job effectively.
