
What Are Green Algorithms Anyway?
European governments are encouraging the development of so-called “green” algorithms and “green” AI. The term is ambiguous: such systems are not always designed to be sustainable themselves.

Naiara Bellio covers privacy, automated decision-making systems, and digital rights. Before joining AlgorithmWatch, she coordinated the technology section of the Maldita.es foundation, addressing disinformation related to people's digital lives and leading international research on surveillance and data protection. She also worked for Agencia EFE in Madrid and Argentina and for elDiario.es, and collaborated with organizations such as Fair Trials and AlgoRace in researching the use of algorithmic systems by public administrations.


While public bodies see an urgent need to integrate generative Artificial Intelligence tools into school curricula, teachers are already noticing a sharp decline in performance and creativity.

Banks use automated systems to monitor customers and their transactions to fight money laundering and the financing of terrorism. But sometimes these systems make mistakes that lead to the blocking or closure of people’s bank accounts. The phenomenon is called de-banking.

Nurses, healthcare professionals, and childcare workers increasingly turn to algorithmically managed apps that connect them with hospitals, clinics, and households in need of temporary staff. Born in an economy shaped by gig work, these platforms replicate many of the same structural problems.

Automated systems go astray: they contribute to child sexual violence, deny people their social benefits, or block organizations' online presence. Affected people often feel helpless when their rights are violated, but some are taking up the fight even as current laws fail to protect victims.

We wanted to collect data on how newspapers wrote about AI. Swayed by the sirens of OpenAI, we thought that using a Large Language Model was the best way to do such a massive text analysis. Alas, we failed. But LLMs are probably not to blame.

Automated image generators are often accused of spreading harmful stereotypes, but studies usually only look at MidJourney. Other tools make serious efforts to increase diversity in their output, but effective remedies remain elusive.

A recent investigation by Tracking Exposed shows that Glovo’s subsidiary in Italy, Foodinho, registers couriers’ off-shift location and shares it with unauthorized parties. The delivery app provider has also been found to have created a “hidden” credit score for their riders.

In a small Spanish town, several schoolboys used generative models to create fake nudes of their fellow pupils. Police, prosecutors, and parents are at a loss on how to pursue a case that shows, once again, that women are the main victims of deepfakes.

In Spain, 200 students at the International University of La Rioja failed their exams. Some blame a glitch in the proctoring software, but it might have been a change in the system’s rules. University officials gave contradictory explanations, leaving students to fight against bureaucracy and the assessment of a machine.

A lack of source material, investment, and commercial prioritization are all holding back the development of generative models and automated moderation for languages spoken in smaller countries and regions.

Police in the Basque Country use an algorithm to predict gender-based violence. The tool's accuracy is unclear, and it leaves a lot of room for the personal opinions of police officers.

Eleven years ago, the Catalonian Department of Justice, in Spain, introduced RisCanvi, a system that estimates the risk that inmates will reoffend after leaving prison. All Catalonian prisons use it, but the tool is hardly transparent.

Veripol is software that assesses the veracity of complaints filed with the Spanish national police. It was introduced in 2018, but it’s unclear whether it works as intended.

Madrid South Station’s face recognition system automatically matches every visitor’s face against a database of suspects, and shares information with the Spanish police.