Dr. Nicolas Kayser-Bril

Head of Journalism

Photo: Julia Bornkessel, CC BY 4.0

Nicolas is a French-German journalist. He pioneered data-driven journalism in Europe, regularly speaks at international conferences, and has taught at several journalism schools in France, Switzerland, and Russia. A self-taught developer, he created interactive, data-driven applications for Le Monde. He built the data journalism team at OWNI, and co-founded and managed Journalism++ from 2011 to 2017. Nicolas was one of the main authors of the Data Journalism Handbook.

Articles for AlgorithmWatch

How “existential risk” became the AI industry’s most successful strategy

When their key product was faced with unfavorable scientific evidence and the risk of regulation, most businesses in the 20th century defended themselves by sowing doubt on an industrial scale. Big AI is doing something radically different: it floods the zone with potential future risks.

Members of the Armed Forces Parliamentary Scheme taking part in a Wargaming exercise.

Where the army does not use AI

Wargames allow military officers to explore scenarios and teach them how to make decisions as if they were in combat. Although specialists have considered the idea, they are so far not rushing to stuff these games with AI.

AI probably does lead to more computer security disasters

Anecdotes abound of people losing data after trusting a chatbot to look after their computer. But does that constitute a trend? And is AI to blame, or are those who blindly trust chatbots simply the sort of people who would have done something foolish anyway? More research is needed, but there is a strong case to be made that AI is, at least partly, making matters worse.

The Equinix AM3/AM4 data center in Amsterdam

Maybe there is no AI bubble

Despite AI companies’ utter incapacity to generate profits, investors continue to pour capital into data centers. If the financial bubble does not burst, perhaps it is no bubble at all, but classical economics may be ill-suited to explain what is unfolding.

Never tell an AI you’re from Naples

Today, I take a look at geographic prejudice among four open-weight LLMs. After reading, you might want to edit your resume to put more emphasis on anything you can find related to Stockholm or Amsterdam.

The State Hall (Prunksaal) of the Austrian National Library

Large language models as attributes of statehood

As governments champion AI, the development and control of large language models are becoming matters of state. European countries are investing heavily in language resources, with geopolitics casting a large shadow over this endeavor.

Despite plenty of renewable energy, data centers split Norwegian society

In a country with abundant hydro- and wind power, as well as plenty of water, rising electricity needs pit industries against each other. Parts of civil society attempt to slow down the building craze, to little avail.

An advert for Palmona, a brand of artificial butter, ca. 1914

What artificial butter tells us about Artificial Intelligence

A person cleaning the entrance of an office building.

Platform work regulation is failing cleaners

Researchers studying algorithmic management have long had a blind spot when it comes to cleaning apps’ workers. The evidence now emerging paints a nuanced picture: algorithms play a far smaller role than they do for drivers and couriers, yet cleaners still face severe consequences.

close-up photo of Goosebumps Slappy the Dummy ventriloquist doll

Are all LLMs the same? I set out to measure their similarity.

AI-generated text often has a certain je-ne-sais-quoi of monotony and blandness, whatever model is used. An attempt to put a number on this phenomenon shows once more that measuring meaning is close to impossible.

Three Arrows (Drei Pfeile) symbol on a red-purple background

Why left-wing parties are not pushing for GenAI

I'm often asked why there are so few positive visions of AI. After all, machines can make work less tedious and, provided the benefits are shared fairly, increase the well-being of all. But generative AI does not quite work that way.

A photographic rendering of a simulated middle-aged white woman against a black background, seen through a refractive glass grid and overlaid with a distorted diagram of a neural network.

An AI slop farm stole my identity

AI slop is everywhere. By one estimate, roughly one in five articles referenced on Google Discover is automatically generated. Less well known is that AI slop farms steal the identities of existing journalists and influencers to increase the odds of being picked up by these algorithms.

Excerpt from Fingerprints by Francis Galton

Remote biometric identification is a colonial boomerang

While many feel rightly outraged by how live face recognition and other remote biometric identification infringes on our freedom in public spaces, I think that outrage partly misses the point of the technology. To understand its purpose, we need to look into colonial history.

Albanian Prime Minister Edi Rama

AI avatars have been in politics for a decade, but Diella is different

Albania’s new “Minister for AI” is an LLM-powered avatar. AI has been touted as a replacement for politicians since at least 2017, but the Albanian software, which will be in charge of public procurement, opens the gates to corruption on a new scale.

View of the French lower house of Parliament.

For politicians, TikTok’s algorithm is a handy bogeyman

A parliamentary report about TikTok has received lots of media attention in France in the last few weeks. It is interesting in its own right, and it's also a prime example of how many European politicians behave when faced with an issue related to algorithms.

A screen in a church displaying a raccoon.

AI Jesus: What’s left of Pope Francis’ guidance?

A church in Austria put an AI-generated exhibition on display. This does not exactly fit with the Vatican's guidelines on AI.

Image Generators Are Trying to Hide Their Biases – And They Make Them Worse

In the run-up to the EU elections, AlgorithmWatch investigated which election-related images can be generated by popular AI systems. Two of the largest providers don’t adhere to safety measures they themselves recently announced.

An illustration of the protagonists of the story. Top: Camille Lextray, Jerzy Afanasjew. Bottom: Anastasia Dedyukhina, Miriam Al Adib, Soizic Pénicaud.

Fighting in the Dark: How Europeans Push Back Against Rogue AI

When automated systems go astray, they contribute to child sexual violence, deny people their social benefits, or block organizations’ online presence. Affected people often feel helpless when their rights are violated and current laws fail to protect them, but some are taking up the fight.

How Not to: We Failed at Analyzing Public Discourse on AI With ChatGPT

We wanted to collect data on how newspapers wrote about AI. Swayed by the sirens of OpenAI, we thought that using a Large Language Model was the best way to do such a massive text analysis. Alas, we failed. But LLMs are probably not to blame.

The year we waited for action: 2023 in review

Exactly one year ago, I wrote that automated systems might be regulated for good in 2023. That was too optimistic. Not only did European institutions fail to pass the AI Act, even in its watered-down version, but the rise of generative models also brought us to a new level of danger.

Some image generators produce more problematic stereotypes than others, but all fail at diversity

Automated image generators are often accused of spreading harmful stereotypes, but studies usually only look at Midjourney. Other tools make serious efforts to increase diversity in their output, but effective remedies remain elusive.

AlgorithmWatch welcomes first fellows in algorithmic accountability reporting

After announcing our fellowship in algorithmic accountability reporting, we received over 100 applications and were overwhelmed by their quality and diversity. After a careful examination, we’re very happy to introduce our first six fellows: an outstanding group of journalists, academics, and a civil society activist.

The year automated systems might have been regulated: 2022 in review

Automated systems were surprisingly absent from this year’s major stories. On the regulation front, European institutions stepped up their efforts. How much change Europeans can expect depends on the institutions’ resolve, and the first test of 2023 has already begun.

Wolt: Couriers’ feelings don’t always match the transparency report

In August, the Finnish delivery service Wolt published its first “algorithmic transparency report”. We asked three couriers about their experiences. They don't always match the report’s contents.

Algorithmic elections: How automated systems quietly disenfranchise voters

In the United States, automated systems purge voter rolls and verify signatures. They sometimes discard valid votes, especially in historically marginalized communities. Such systems are unlikely to be as widespread in the European Union.