A Year of Challenging Choices – 2024 in Review
2024 was a "super election" year, and it marked the rise of generative Artificial Intelligence. With the adoption of the AI Act, it seemed poised to be the moment we would finally gain control over automated systems. Yet that certainty still feels out of reach.
2024 was a super election year: voting took place in more than 60 countries as well as in the European Union, which held elections for the European Parliament. Generative models also found their way into this “democracy party” – as elections are ironically referred to in Spain – and their influence was not always a positive one.
Shortly before the EU elections, we tested the image generators Midjourney, DALL·E, and Stable Diffusion. It turned out, for example, that they generally portray female politicians as young, white, and slim. In the run-up to German state elections, we examined the text generators Gemini, GPT-3.5, and Copilot. OpenAI's free GPT-3.5 model provided incorrect information in around 30 percent of cases. Copilot's answers were considerably more accurate than those of the other models: only 5 percent contained clear falsehoods, and sources were frequently included as weblinks. After we shared our findings with Microsoft, the model recognized election-related questions and deployed its safeguards far more often – in 80 percent of cases instead of 35. The other companies, however, did not engage. OpenAI even tried to sell us ChatGPT Enterprise instead.
We even took to the streets at a demonstration to take a stand against Remote Biometric Identification (RBI). This sensitive issue became even more pressing with the 2024 Olympic Games, when the French government practically turned Paris into a surveillance lab. Image recognition algorithms tracked people's movements throughout the city, yet they did not deliver any useful results. In October, the German government presented a “security package,” a set of measures that included face recognition software scanning public spaces and, in effect, the entire internet. The government acted as if this were a panacea against crime. In fact, the use of this technology is an attack on our civil liberties.
Defining high risks
The AI Act was finally adopted, though not with all the guarantees we had been hoping for. When finalizing the law, the decision-makers shirked their responsibility by imposing meaningful requirements only on real-time biometric identification. The bar the AI Act sets for allowing post remote biometric identification in public spaces (when the identification happens some time after the recording), on the other hand, is very low: the suspicion of any crime can justify the use of such a system. The Dutch and Spanish police, for example, have already analyzed surveillance camera recordings with biometric identification software, and now that the AI Act is in force, other European police authorities (such as Sweden's) intend to make use of the new powers. However, these new technological possibilities have repeatedly led to unlawful arrests.
Automated systems that are not based on AI are mostly not classified as high-risk and therefore hardly have to meet any requirements. Yet they continue to wreak havoc. The Swedish Social Insurance Agency, for example, scores applicants for temporary child support with AI-based software that persistently flags vulnerable groups as potential fraud suspects. In Denmark, an automated system used by the social services agency specifically targets migrants and refugees. Thousands of pensioners in India were deprived of their pensions because an algorithm incorrectly listed them as "dead" in government records.
But civil society is resilient. A French digital rights coalition is challenging the algorithms the government uses to detect miscalculated welfare payments, which were found to discriminate against single mothers and people with disabilities. If the challenge succeeds, it could become one of the first cases in which such a data-driven system is deactivated under the AI Act.
In the name of safety
While Europe is trying to get a grip on high-risk AI, more than 40,000 people have been killed in the Gaza Strip. The Israeli army is using a system called “The Gospel” to target and destroy buildings. Another system called “Lavender” is designed to search for Hamas activists, while the “Where's Daddy” system automatically tracks the movements of targeted people in their homes via cell phone signals.
Big Tech has embraced a profit-driven approach to warfare. Despite public declarations that their AI models would not contribute to warfare, Meta and Palantir have sold their technologies to defense departments, particularly in the United States. Major cloud providers have followed suit.
AI is also gaining ground at the EU's external borders. In-depth research by AlgorithmWatch has revealed the extent to which the European Union remains committed to techno-solutionist strategies for border management. Among the myriad dystopian border security research projects proposed over the past decade – ranging from sentiment analysis to “predict” migration flows to virtual reality tools designed to influence border agents’ spatial assessments – only one was rejected by the EU for failing to meet ethical requirements.
As the EU intensifies its reliance on AI to control “irregular” migration, the Entry/Exit System (EES), a biometric categorization initiative, has been delayed for the third time, as member states feared that simultaneous implementation could lead to logistical bottlenecks. In the U.S., the Department of Homeland Security plans to use face recognition to monitor migrant children. In Canada, authorities want to integrate similar technology into an app designed to track people facing deportation. In the UK, an algorithm called “IPIC” (Identify and Prioritise Immigration Cases) is being used to recommend enforcement actions based on migrants' sensitive data.
Crushing nature and workers
Generative AI has become an integral part of our lives. That's why the public needs to know about the impact the technology is having on the climate crisis. Google and Microsoft have admitted that their greenhouse gas emissions have increased by 50 percent over the last five years, driven by the development of generative AI. Research has shown that big tech companies may have underestimated their data centers' carbon emissions by several hundred percent.
AI-based systems cannot do without data centers, but neither can they do without human (and precarious) labor. Workers in Kenya who make data machine-readable are paid with soft drinks and junk food. Finnish prisoners have to tag content for AI training. The performance and autonomy of AI that is so vigorously hailed today looks far less impressive once you look behind the surface: self-driving cars only work because someone is doing click work in a stuffy room in Nairobi. By accepting that platform operators train their models with our user data, we support this system of exploitation, often without knowing it.
For years we have wondered about the extent to which Big Tech is entangled with politics, as CEOs try to shape society to their own ends. This time, the answer was obvious: Elon Musk's platform X favored Donald Trump, and Musk himself injected millions into the Trump campaign.
Such alliances can serve harmful purposes. Our goal is to create a counterbalance by fostering positive relationships and seizing new opportunities to push back. The adoption of the AI Act and the new data access provisions for investigating systemic risks under the Digital Services Act open up a whole new range of possibilities for action. While 2024 may have been a year of challenging choices by voters and decision-makers, 2025 could be the year we build the resistance.