Fighting in the dark: How Europeans push back against rogue AI

When automated systems go astray, they contribute to child sexual violence, deny people their social benefits, or block organizations' online presence. Those affected often feel helpless when their rights are violated, but some are taking up the fight while current laws fail to protect the victims.

An illustration of the protagonists of the story. Top: Camille Lextray, Jerzy Afanasjew. Bottom: Anastasia Dedyukhina, Miriam Al Adib, Soizic Pénicaud. (Illustration: Léo Verrier)

Nicolas Kayser-Bril
Reporter

Miriam Al Adib is a Spanish gynecologist who lives in Almendralejo. One day last September, she found her teenage daughter in distress. Naked pictures of her and other classmates were circulating in her school’s WhatsApp groups. The pictures were fake, her daughter said: schoolboys had generated them using an AI tool.

Al Adib felt her daughter’s anxiety but knew immediately what to do. She posted a video on Instagram in which she reported what happened and asked victims to come forward. A few hours later, many other mothers had texted her, sharing their daughters' experiences. They all went to the police, and the boys stopped creating fake nudes.

Al Adib had been working to raise awareness of child sexual violence for more than a decade, both online and offline. She knows how to react to such attacks. Still, she said, she felt neglected by the institutions that are supposed to prevent such cruel uses of AI. Today, as the uproar calms down in Almendralejo, the apps that let boys create fake nudes remain available – and are probably in use – throughout Europe’s schools.

We talked to people who decided to act against AI-caused injustices. They all explained how difficult it was to redress the wrongs and make an impact. The rights enshrined in European legislation are rarely enforceable for individuals.

“The algorithm’s fault”

When Soizic Pénicaud started working with the algorithms of the French welfare management system, she organized training sessions for people in close contact with beneficiaries. One such caseworker told her that the only advice she gave to people who had trouble with the request-handling system was “there’s nothing you can do, it’s the algorithm’s fault.” Pénicaud, a digital rights researcher, was taken aback. Not only did she know that algorithmic systems could be held accountable, she knew how to do it.

Together with several civil society organizations, she filed freedom of information requests with the welfare agency. In November 2023, she finally obtained the details of the algorithm. Statistical analyses showed that the system had many faults. In particular, it treats people living with a disability, single mothers, and poorer people in general as potential fraud risks, leading in many cases to the automated suspension of benefits and the creation of “robo-debt.”

Pénicaud’s victory was bittersweet. Even though it proved that welfare controls were discriminatory, as many suspected, regulators and politicians have yet to weigh in on what should become of the algorithm now. At a deeper level, the French story repeated, in many ways, the scandals that had shaken welfare agencies in the Netherlands and Poland. There too, poorly designed systems, combined with an ideology that treats some people as undeserving of society’s solidarity, led to thousands losing income they were owed by law. And there too, the problem was revealed thanks to the efforts of researchers and activists.

Public apology

Almost 2,000 kilometers east of Paris, another automated system wreaked havoc in the professional life of Jerzy Afanasjew, a Polish researcher. He was a member of the board of the Social Initiative for Drug Policy (SIN in its Polish acronym), a non-profit promoting harm reduction among young people. One day in 2018, SIN noticed that some Facebook posts had been summarily removed from their official page. A few weeks later, it was a whole page, then an Instagram account. According to Facebook, the content had infringed their “community standards,” which prohibit the sale of illegal psychoactive substances. But SIN does not sell drugs; it provides information about them.

That was a serious blow for Afanasjew. Given Facebook’s (now Meta) quasi-monopoly, SIN lost their main channel of communication and, with it, their target audience. Within months, the number of questions they received from Poles trying to lower their risk of infection or overdose dropped from several a day to a trickle. This was all the more infuriating as people who leverage Meta’s products to actually sell drugs have little to fear from the giant’s algorithms. They simply use thinly veiled code words, such as “green socks” for marijuana, to avoid automated detection.

But Afanasjew was undeterred. Together with a digital rights organization, he sued Facebook. Not under contract law (a company has the right to decide on its “community guidelines,” after all) but under a peculiarity of Poland’s civil code that protects the reputation of organizations. By unilaterally removing SIN’s online presence, Facebook tainted their good name, Afanasjew argues. He demands a public apology. The final ruling is expected in March 2024.

No place to go

Europeans who try to protect their children, or who are unfairly targeted by a welfare agency’s fraud detection algorithm or a monopolistic social network’s automated henchmen, have few options to fight back, and almost none if they are on their own. Al Adib, Pénicaud, and Afanasjew did not act alone. They were accompanied by specialist civil society organizations that offered legal advice and logistical support.

Most of the people we talked to said that they first filed their complaints through official channels, in vain. Afanasjew and the SIN team did click Facebook’s button to contest the automated decision. They were never able to talk to a human to explain that harm reduction is not the same as selling drugs.

Even public authorities rarely help. A French engineer discovered that software used in recruiting could automatically share blocklists – which could lead to some people never having their applications reviewed by a human. He tried for years to alert the data protection authority and the equality body. He received an acknowledgment of receipt, nothing more.

In some cases, the authorities do answer, but late. In 2021, Anastasia Dedyukhina alerted the Spanish data protection authority that the Mobile World Congress, a conference in Barcelona with about 20,000 attendees, was using face recognition illegally. It took two years, but the authority finally fined the organizers €200,000.

Although Dedyukhina is a seasoned public speaker and knowledgeable about privacy laws, she still had to rely on the help of a friend who specializes in AI regulation to file her complaint. “I had some sense of how the General Data Protection Regulation works. I know how to ask about what automated decisions have been taken about you,” she says. But she felt she needed the support of a legal professional to exercise her rights.

Fear of retaliation

The lack of easy avenues to file complaints against automated systems is not the only obstacle Europeans face. State-run systems are as powerful as they are unavoidable, and monopolistic private companies often are too. Many fear that taking any action will lead to retaliation.

Beneficiaries of the French welfare system can legally request the details of the algorithms they are subjected to. But several we talked to said that they would not make use of this right, lest it lead to additional scrutiny from the authorities involved. Such fears may not be unfounded: the legal requests are processed by the very people who oversee the disbursement of benefits.

At SIN in Poland, Afanasjew said that very few organizations expressed support for their case, especially not on Facebook. The nature of their activity and the political climate in Poland at the time (the country was ruled by the right-wing Law and Justice party until late 2023) may explain some of this silence. But Afanasjew is convinced that few are willing to risk angering Facebook, as they don’t want to be targeted in turn.

When institutions don't provide the means for people to assert their legal rights, demanding that those rights be respected takes courage. “When people accept the status quo, they allow their rights to be taken away,” Dedyukhina says.

Tech frames

These power imbalances help explain why so few people file complaints about automated systems. Besides the lack of places to go and the fear of retaliation, a third, less visible difficulty blocks their path: the technological nature of AI blinds many to its social consequences.

In France, many of the people Pénicaud initially met told her: “it’s about tech – I don’t understand.” She senses that people who are not technologists themselves don’t feel entitled to scrutinize algorithmic systems. Pauline Gourlet, a researcher at Sciences Po’s Medialab who works on the Shaping AI project and specializes in the discourses surrounding AI, experienced the same thing. Simply talking about “bias” in a system, she remembers, immediately turned the conversation into a technical issue, making any non-technical person feel out of their depth.

Valérie Pras, who worked with Pénicaud, says that she spends a lot of time explaining that an algorithm can be inspected, if broken down into small parts. “We dissect everything, to be able to explain at what level the computer system intervenes,” she says. This painstaking work is the first step required to overcome the helplessness so many people feel when faced with automated systems.

And it’s not just the software. Maud Barret, a PhD student and a colleague of Gourlet’s, adds that the social and political forces at play are at least as important as the technology. “When you look at a specific problem [such as fraud detection in the welfare system], you realize that the question is not technological per se but about changing how social services are provided,” she said.

Against the hype

Such voices are seldom heard amid the noise of news about billions being bet on the next AI champions. Executives at AI start-ups prefer to entertain a techno-utopian discourse in which anthropomorphized software is often omnipotent, if not godlike. The injustices fought by Pénicaud, Afanasjew, and others, on the other hand, require breaking algorithmic systems down into their technical and sociological components. One has to look at the details to identify precisely where things go wrong and to find specific legal or political means to intervene. But in many people’s minds, the idea of a godlike entity doesn’t allow for such a nuanced review.

Despite countless Europeans’ courage and persistence in taking on rogue AI, their efforts barely make a dent in the communication plans of billion-euro tech companies. In September, the European Commission’s president spoke of the “risk of extinction from AI,” a talking point boosted by the industry with no basis in fact. Many scholars argue that this long-term risk scenario mostly serves to distract from the actual risks of AI. Already today, these risks have derailed the lives of thousands, if not millions, putting them in debt, making their online presence disappear, or subjecting them to a tight net of biometric surveillance.

But the fights of Al Adib, Afanasjew, and others could spill over. “I had lots of people approaching me saying they wanted to do the same [complaint about the unwarranted use of face recognition],” Dedyukhina says. “When you open this little door, you’re actually giving permission for the rest to do the same.”

Naiara Bellio (she/her)

Reporter

Naiara Bellio covers privacy, automated decision-making systems, and digital rights. Before joining AlgorithmWatch, she coordinated the technology section of the Maldita.es foundation, addressing disinformation related to people's digital lives and leading international research on surveillance and data protection. She also worked for Agencia EFE in Madrid and Argentina and for elDiario.es, and has collaborated with organizations such as Fair Trials and AlgoRace in researching the use of algorithmic systems by administrations.

Mathilde Saliou

Former Fellow Algorithmic Accountability Reporting

Mathilde is a French journalist specializing in digital issues. A graduate of Sciences Po Paris, she has worked for The Guardian, RFI, 20 Minutes, Les Inrocks, Next INpact, and others. In 2023, she published the book Technoféminisme, comment le numérique aggrave les inégalités ("Technofeminism: how digital technology exacerbates inequalities") with the French publishing house Grasset.
