The year automated systems might have been regulated: 2022 in review

Automated systems were surprisingly absent from this year’s major stories. On the regulation front, European institutions stepped up their efforts. How much change Europeans can expect depends on the institutions’ resolve, and the first test of 2023 has already begun.

Nicolas Kayser-Bril
Reporter

In 2022, just like in the previous decade and in the decades to come, the most important developing story was the breakdown of our climate. The prolonged absence of rainfall is upending livelihoods in East Africa, while crops in West Africa were destroyed by torrential downpours. Hundreds of millions suffered from hunger this year, and the situation is getting worse.

That this development has nothing to do with automated systems might come as a surprise. In early 2019, for instance, Microsoft announced that “using AI [could] solve world hunger and malnutrition [by 2030].” The claim might have been overblown.

Overall, as I assessed in 2020, Artificial Intelligence’s contribution to the climate crisis has been acceleration, not mitigation. In the regulatory filings of oil and gas corporations, which are major sources of greenhouse gas emissions, mentions of AI did not increase significantly compared to 2021, but remained at an all-time high.

Artillery vs. computer vision

The other major story of 2022 was, of course, Russia’s war of aggression against Ukraine. Here, too, automated systems do not seem to play an outsized role. Although thousands of academics and celebrities warned in 2016 against lethal autonomous weapons, which were supposed to be developed “within years,” there is still no evidence that a killer robot used Machine Learning to target and attack humans.

Documented deployments of automated systems in Ukraine marginally improve existing processes rather than revolutionize warfare. Police use face recognition to identify casualties and potential saboteurs, for instance. Staff from a voice-cloning company launched a project to detect missiles acoustically. And it is quite possible, though unsubstantiated, that the drones used in the daring raids on Russian military bases relied on computer vision software to operate autonomously.

(Not everyone agrees. The head of Palantir, a company that sells software for making sense of large amounts of data, offered his tools to the Ukrainian army. He says that “the power of advanced algorithmic warfare systems is now so great that it equates to having tactical nuclear weapons,” but provides no evidence.)

But overall, what surprised observers was not the use of advanced technology on the frontline. It was the major role of artillery, which was – before the war – thought by many to be a relic of the 20th century.

AI winter

Even in the cozy offices of Silicon Valley, Artificial Intelligence might be losing steam, with some AI teams disbanded or summarily fired. True, the year was rich in breakthroughs, with computers generating videos from prompts, bullshitting like real con-men, beating humans at Stratego and Diplomacy, and predicting protein structures. But it ended with mass layoffs. More than 200,000 people working in tech may have been laid off since November, with Amazon and Meta leading the charge.

Whether or not a new “AI winter” sets in is unlikely to make much of a difference in the lives of Europeans. Much like autonomous weapons still have not revolutionized warfare, shiny new tools rarely transform society. GPT-3, for instance, a piece of software that produces text which is, at first glance, indistinguishable from human prose, was touted as a game changer when it was released in 2020. Two-and-a-half years later, I was not able to find any documented use of the tool outside of niche apps (GPT-3’s biggest customer seems to be GitHub’s Copilot, a tool that suggests code snippets to programmers).

Eroding trust

Far from being transformational, automated systems of various technological sophistication slowly creep into institutions and, often, erode their foundations. In 2022 as in previous years, automated systems were introduced or expanded under the guise of improving efficiency but ended up cutting benefits and decreasing the quality of public services.

In France, the national welfare management system keeps scoring all beneficiaries, every month, to assess their “risk”. Many social workers privately disagree with this approach and quit, paving the way for the privatization of the system.

In Serbia, a centralized database of welfare services was introduced in March. It uses 130 variables, including ethnicity, to assess the rights of beneficiaries. In less than a year, it led to 22,000 people losing their subsidies, typically because minor commercial activities (such as selling scrap metal) are now registered and trigger a suspension of unemployment benefits. Roma people are the most affected.

In the Netherlands, the government plans to introduce a new data collection system to automatically assess the “performance” of mental health care. “Core values in healthcare such as professional autonomy and a confidential treatment relationship are being eroded by technocratic wishful thinking,” the Civil Rights Platform, a non-profit, wrote about the project.

Fighting back

As in previous years, civil society organizations have fought back against this trend. In Switzerland, several cities decided to ban face recognition in public spaces following pressure from organizations including AlgorithmWatch CH.

In the Netherlands, a student initiated legal proceedings against VU Amsterdam, a university, claiming that the automated anti-cheating software it uses penalized her. The software uses face recognition to verify a student’s identity and often fails to recognize people with darker skin tones. The Dutch Institute for Human Rights, the country’s equality body, ruled in her favor.

In France, students from the Paris 8 university went to court over another automated anti-cheating tool. The judge ordered the university to suspend its use of the software.

The list of small victories is much longer. But the most impactful instruments in the fight against the disproportionate and secret use of automated systems in 2022 might have been drawn up by European institutions.

European law

The final version of the Digital Services Act (DSA) was published in October, and some of its provisions entered into force in November. The most important ones, such as researchers’ right of access to platform data, will apply from February 2024. The AI Act (AIA), first proposed in 2021, was heavily discussed and negotiated this year. It should become law in 2023 or 2024.

Taken together, these texts impose new transparency obligations on online services and operators of automated systems, especially the larger ones. Under the DSA, social networks will have to be more transparent about their moderation decisions and reduce “systemic risks” such as interference in elections. Under the AIA (in its current form), “high risk” automated systems would have to be listed in a transparency register, and some forms of automation that carry “unacceptable risks” would simply be banned.

Overall, the new laws could bring a new standard of transparency and accountability to operators of automated systems. The key question, of course, is how they will be enforced. The precedent of the data protection regulation, known as GDPR, doesn’t offer much hope.

GDPR precedent

The GDPR was adopted in 2016 and became applicable in May 2018, four-and-a-half years ago. It restricts the collection and use of personal data, and forbids decisions based solely on automated processing that have “legal effects” on people. It gives national data protection authorities (DPAs) sweeping powers to enforce the text.

In 2022, DPAs did make use of these powers. Clearview AI, a company that scrapes pictures off the internet and provides face recognition services to the police and other state agencies, was fined by the French, Greek and Italian DPAs. Each imposed the maximum fine of 20 million euros. Clearview AI had not asked for consent before scraping the pictures, the DPAs argued.

Despite the hefty sums, Clearview AI, which is based in the United States, ignored the orders and did not pay a cent.

In other areas, too, the GDPR fell short of its promises. Real-time bidding is an advertising technique whereby companies exchange personal data, in real time, to decide which ad to show a user. It has long been argued that the practice is illegal under the GDPR, but it continues unabated.
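To illustrate the mechanism, here is a minimal sketch in Python of how such an auction could work. It assumes a simplified second-price rule; the bidder names, prices and bid-request fields are all made up for illustration, not a description of any real ad exchange.

```python
from dataclasses import dataclass

@dataclass
class BidRequest:
    # Personal data an ad exchange might broadcast to bidders.
    # All field names here are hypothetical, for illustration only.
    user_id: str    # pseudonymous identifier (e.g. a cookie ID)
    location: str   # approximate geolocation
    interests: list # inferred interest segments
    page_url: str   # the page the user is currently viewing

class Bidder:
    """An advertiser's bidding agent. Its valuation of the ad slot
    depends on the personal data contained in the bid request."""
    def __init__(self, name: str, targeted_interest: str, max_price: float):
        self.name = name
        self.targeted_interest = targeted_interest
        self.max_price = max_price

    def bid(self, request: BidRequest) -> float:
        # Bid only if the user's inferred interests match.
        if self.targeted_interest in request.interests:
            return self.max_price
        return 0.0

def run_auction(request: BidRequest, bidders: list):
    """Simplified second-price auction: every bidder receives the
    user's data, the highest bid wins, and the winner pays the
    price offered by the runner-up."""
    offers = sorted(
        ((b.bid(request), b) for b in bidders),
        key=lambda offer: offer[0],
        reverse=True,
    )
    _, winner = offers[0]
    price_paid, _ = offers[1]  # second-price rule
    return winner, price_paid

request = BidRequest("abc123", "Berlin", ["travel", "sports"], "news.example/article")
bidders = [
    Bidder("TravelAds", "travel", 2.40),
    Bidder("CarAds", "cars", 1.10),
    Bidder("SportsAds", "sports", 1.90),
]
winner, price = run_auction(request, bidders)
print(f"{winner.name} wins the slot and pays {price:.2f}")
# -> TravelAds wins the slot and pays 1.90
```

The privacy concern lies in the first step, not in the auction rule itself: the bid request, carrying the user’s data, goes out to every participating bidder, whether or not they win.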

A similar practice was ruled illegal in December. The European Data Protection Board said that Facebook and Instagram could not show personalized ads to users without their explicit consent. It followed a 2018 complaint by Austrian activist Max Schrems.

Despite the news, and although Europe accounts for roughly a quarter of Meta’s revenue, the company’s share price did not budge (the 6% drop on the day of the announcement was erased within a week). It shows that investors do not expect the decision to force Facebook to change its business model. In other words, they do not consider the GDPR to be much of a limitation on a company’s business.

Enforcement push

Enforcement of the DSA and the AIA might be different. European institutions and national governments have set up teams of experts to monitor the activity of operators of automated systems. France led the way with PEReN, which grew to around 30 staff in 2022. In early 2023, the Spanish agency for the supervision of Artificial Intelligence will open in A Coruña and the European Centre for Algorithmic Transparency will start operating in Seville. This might give regulators better insights into the inner workings of automated systems, and could allow them to be more proactive in their relationships with the companies that operate them.

But the first test for 2023 will be Twitter. In late 2022, a far-right billionaire bought the platform and quickly fell foul of European regulations. He fired the people in charge of the company’s GDPR obligations days into his tenure. He then changed the moderation rules, resulting in a dramatic increase in hate speech, and arbitrarily suspended the accounts of journalists. Commissioner Thierry Breton warned him that Twitter might be in breach of European regulations.

How European institutions deal with “Twitter 2.0” will be a signal to other online services, large and small. If they keep providing the service with free content and engaging with its owner as if he were acting in good faith, others might take it as a cue that anything is allowed, as long as they pretend to abide by the rules. If, on the other hand, European institutions prioritize their own social media channels and show their resolve in enforcing existing legislation, 2022 might be remembered as the year when automated systems began to be effectively regulated.
