The year we waited for action: 2023 in review

Exactly one year ago, I wrote that automated systems might be regulated for good in 2023. This was too optimistic. Not only did European institutions fail to pass the AI Act, even in its watered-down version; the rise of generative models also brought us to a new level of danger.

"Waiting for Godot" from 1963. University of Michigan.
Nicolas Kayser-Bril
Head of Journalism (on parental leave)

In 2023, I waited. Late in 2022, I asked PimEyes, a face recognition service, to delete any pictures it had of me. Giving citizens the right to access and delete personal data is the cornerstone of the General Data Protection Regulation (GDPR). After the company failed to comply, I filed a complaint with Berlin’s data protection authority (DPA). That was on 10 January. I received an acknowledgment of receipt two weeks later, telling me that they would keep me updated. I have not heard from them since, and my face is still searchable on PimEyes.

The deletion of my face at PimEyes is not the only event that did not happen in 2023. In late 2022, when an antisemitic billionaire bought Twitter, the European Commission warned him. “In Europe, the bird flies by our rules”, EU Commissioner Thierry Breton famously said in October. Since then, Twitter has fallen afoul of several of its legal obligations. The company left the EU’s code of practice on disinformation, leading another top EU official to announce that the Commission would engage in a “confrontation”. In October, the Berlaymont sent an official list of questions about content moderation after the platform welcomed falsehoods and propaganda operations during the Israel-Hamas war. A Commission spokesperson told me that Twitter did answer, but refused to provide any detail. Meanwhile, misinformation is still rife on the platform, with Twitter (now rebranded “X”) even paying some of the users who produce egregiously false claims. In the confrontation between the Commission and Twitter, the European side is currently not winning. Not only does the Commission fail to enforce the Digital Services Act (DSA), which regulates large online platforms, it keeps providing Twitter with free content. All EU accounts, and all Commissioners, are still active on the platform. The Commission even kept advertising on Twitter until late November.

Finally, I’m still waiting for the AI Act. In 2019, the newly installed European Commission vowed to make the regulation of Artificial Intelligence a priority. After years of negotiations and the landmark DSA in 2022, 2023 should have been the year of the AI Act, a piece of legislation meant to govern the field for years to come. The regulation was to be finalized by the end of the year. The Council of the EU, Parliament, and Commission met on 7 and 9 December and agreed, behind closed doors, on the principles of the text. But the details are yet to be published. And from the information that has been made public, it looks like the regulation will fall far short of what is needed to protect Europeans from the abuses of automated systems. Live, mass face recognition systems in particular will only be allowed in cases of terrorist threats or to find missing persons – exceptions broad enough to mean “everywhere, all the time” in practice.

Credibility hit

To be sure, these three stories do not affect people in the same way. But they all show the limits of the power of the law, whether it is lawmakers failing to regulate AI, the Commission failing to confront Twitter, or the Berlin DPA failing to enforce the GDPR.

This powerlessness is giving wings to those who do not wish for automated systems to respect fundamental rights, and who disregard the rule of law more generally. Journalists revealed in November that the French national police, and many municipal police forces, used BriefCam, a software for automated video surveillance, with the face recognition feature switched on. This is absolutely illegal and could even lead to criminal prosecution. The Interior ministry commissioned an internal investigation, but with a twist: the minister asked that the report prove that face recognition was not used. Other police staff in France used Google Bard (a chatbot) to decide on visa claims. Even people unfamiliar with the GDPR can see that letting an online, consumer-grade tool make life-changing decisions that require deep knowledge of intricate laws is not something one should do.

In Germany, the governments of Hesse and Hamburg are pushing ahead with automated video surveillance systems that are supposed to detect “abnormal” situations. We revealed in July that such a system, already in place in Mannheim, sends alerts to the police when people hug in the street, raising concerns over its legality. A ruling in February by Germany’s highest court made clear that the automated processing of personal data for algorithmic policing was illegal (the case concerned Palantir, a tool to sift through databases, not video surveillance), but that did not seem to dampen enthusiasm for the technology. Bavaria’s police even kept using similar Palantir software after the February ruling.

Police forces are not the only ones with little concern for the law. In June, journalists showed that thousands of supermarkets in France used automated surveillance illegally. Cases of algorithmic price-fixing, where companies align their prices with those of their competitors, popped up in Spain and in Austria.

It is, of course, impossible to know how many organizations comply with the law, and how many do not. But these examples, against a backdrop of underfunded enforcement bodies, show that the full might of the law fails to impress.

Truth be gone

This credibility gap is compounded by the most popular AI-related technology of the year: generators of text, images, and videos that look almost exactly like human-made creations but are entirely random. ChatGPT, MidJourney, and their ilk now have hundreds of millions of users. The breakthrough that occurred in 2023 was not one of technology, but of marketing. Easy-to-use interfaces, especially OpenAI’s ChatGPT, released in late November 2022, not only let anyone generate content; they also focused the attention of thousands of journalists and politicians, bringing the topic to people who had yet to partake in the AI hype.

Some undoubtedly found ways to use the technology to their benefit without harming others. Politicians used it to draft speeches or laws, judges to prepare their rulings, and Iranian clerics their fatwas. Many companies rely on it where they previously hired freelance copywriters or designers.

Taken individually, these uses fit with some long-running ills of our societies. Few would honestly argue that ChatGPT’s attempts at producing legislation are that much worse than the thousands of amendments regularly written to paralyze parliamentary work, or that automatically-generated images are worse than bland stock photos. But taken as a whole, the new availability of generative tools is wreaking havoc on the intellectual fabric of society and threatens to destroy any sense of shared reality.

The world’s two leading search engines, Google and Bing, now rely on generative models to display results. Because there is no human oversight, many jokes that went into their training data sets now appear as legitimate results. Users of Google are told that no country in Africa starts with a “K” (an extremely vulgar joke from Reddit), and Bing answers “no” to the question “does Australia exist?” (a pseudo conspiracy theory). Luckily for German users, both Google and Bing still acknowledge the existence of Bielefeld.

As large language models are opaque to their creators, such outputs (which software vendors call “hallucinations”) are not easy to correct. Google still tells users that Kenya does not start with a “K”, close to six months after the error was revealed. The inherent bullshitting nature of generative tools makes them the largest ever providers of misinformation. A study by AlgorithmWatch in October, in the run-up to the elections in Hesse, Bavaria, and Switzerland, showed that Bing provided many wrong answers to straightforward questions about the candidates and poll numbers.

No escape

Even if these chatbots are disdained by users (Bing’s market share dropped in 2023 despite Microsoft’s gargantuan investments in generative AI), we cannot escape the spread of generative models. Thousands of books have already been automatically written. Formerly reliable news operations now publish automatically-generated content with little to no oversight. And there is no effective tool to detect machine-generated text.

It goes beyond text. In October, Google advertised a new feature in its phones, the “Magic Editor”, which lets anyone doctor pictures in seconds. iPhones use other automated systems to alter pictures. Once, a person snapped herself in a mirror; in the picture, the reflection showed a different pose. “The fabric of reality crumbled”, she said. More worryingly, Adobe sold automatically-generated pictures of the Israel-Hamas war to media outlets without clear warnings. Generators of audio and video clips (“deepfakes”) are easier than ever to use. Their main victims remain women. Schoolboys can now upload pictures of their classmates and obtain realistic fake nudes. This caused uproars in Spain and New Jersey, but probably happens everywhere.

Generative AI was also used in elections. In Poland and Argentina, political parties used automatically-generated content to create posters and fake videos of opponents. In Slovakia, a malicious actor published a fake audio clip of a politician hours before the vote.

None of these trends are new. AI was introduced to improve pictures in smartphones around 2016. Deepfakes have been used against women since their inception in the late 2010s. Journalists who followed conspiracy theories warned as early as 2015 that society had “started segmenting into alternate realities”. The past year brought us to a new level.

Whatever the intent behind the use of generative models, the sheer volume of content produced, coupled with the impossibility of easily ascertaining its origin, pushes more and more citizens to assume that any content is fake. This, in turn, automatically favors political movements that disregard factual reality and reject scientific findings. As Hannah Arendt wrote over 70 years ago, “the ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction … no longer exist”.

Civil society is not waiting

This is not inevitable. Our world need not become a hybrid where reality merges with automatically-produced visions to the point where both become indistinguishable. Researchers are already busy investigating the effects of large platforms on society, using the new data-access rights offered by the DSA. It is too early to say whether Meta, TikTok, and others will share authentic and useful data as they ought to, but so far, they seem to comply. Even Twitter/X set up a form where academics can request data access.

The rights offered by the GDPR could bear fruit, too. The European Court of Justice ruled in December that the credit score used by Schufa, Germany’s largest credit bureau, constituted profiling. Any decision based solely on such a score amounts to automated decision-making, the judges wrote, and is hence forbidden unless national legislators carve out exemptions to the general rule. The opacity of credit scores will not be lifted immediately, but the ruling offers new avenues for citizens and civil society organizations to demand transparency and explanations.

In other areas, civil society honed its methods and strategies to push back against the worst abuses brought about by automated decision-making. After journalists revealed that the French police used a tool for automated video surveillance, several organizations filed lawsuits. In one case, they won: the city of Deauville was ordered to stop using the tool.

In European journalism, algorithmic accountability reporting made a significant leap in quality and depth thanks to the efforts of Lighthouse Reports, an investigative outlet in the Netherlands. They filed numerous freedom of information requests throughout Europe and obtained details about several welfare management algorithms in Rotterdam and in France. Importantly, they worked with news operations in affected countries, ensuring widespread coverage (and, in Rotterdam, the suspension of the discriminatory algorithm).

The way forward

These successes are small compared to the energy spent building and deploying automated systems in Europe. Our most significant setback probably occurred on 13 September, when the president of the European Commission said in her State of the Union address that “mitigating the risk of extinction from AI should be a global priority”. The sentence may be innocuous in itself. However, it reveals that the ideology holding that AI might become godlike has permeated the highest echelons of political power. One problem with this ideology is that it disregards the current, actual risks and failures of automated systems, making it much harder to find solutions and help the people affected.

How do we go forward in such an environment? The chances that algorithmic systems will strengthen justice, democracy, and sustainability (AlgorithmWatch’s mission) in 2024 are slim. I do not expect PimEyes to remove the pictures it has of me. And I do not expect top politicians to stop using a social network where algorithms automatically favor far-right extremists.

But, as a Czechoslovak playwright wrote in 1978, in arguably more difficult circumstances, the fact that people exist and work on seemingly hopeless tasks is “in itself immensely important and worthwhile”. That playwright was Václav Havel, and he became president eleven years later.
