Editor's note: This article was written for and first published in the magazine "journalist".
Single mothers were seen as a particularly high risk. In the Dutch port city of Rotterdam, an algorithm spent years combing through the data of welfare recipients to determine which of them were most likely to be committing welfare fraud. According to a joint reporting project by Lighthouse Reports and WIRED magazine, the factors that significantly contributed to a person being flagged by the algorithm as a high fraud risk included: being a woman, being a parent, lacking fluency in the local language, and having difficulty finding a job. In this case, reporters were able to acquire and analyze the city's machine-learning model, along with its training data and user manuals – a valuable look inside the engine room.
Governments and authorities around the world are increasingly deploying algorithms, and most of them are completely opaque. They have a significant influence on who must accept welfare payment cuts, who is flagged for a tax audit, who lands on the police radar, how long a potential prison sentence ends up being, and who receives a loan or a mortgage – and who does not. Algorithms make predictions as to the future success of schoolchildren, forecast health risks, and determine what social media content is prioritized.
In an increasingly digital society, one central task of journalism should be that of examining how such algorithms work, laying bare their discriminatory effects, and exploring how politicians, government agencies, and software producers are deploying them. “What we generally lack as a public is clarity about how algorithms exercise their power over us,” (emphasis in the original) computational journalism expert Nicholas Diakopoulos warned in 2014. “While legal codes are available for us to read, algorithmic codes are more opaque, hidden behind layers of technical complexity.” At the time, Diakopoulos called the emerging journalistic field that exposes power structures and bias effects around automation "Algorithmic Accountability Reporting" (see also journalist 6/2018).
Since then, an increasing number of articles around the world have been focusing on the automation of society and its effects. The United States has even seen the birth of a media outlet, called The Markup, focused entirely on investigative tech journalism. “Big Tech Is Watching You. We’re Watching Big Tech,” is how the non-profit newsroom describes its mission. Algorithms, artificial intelligence, and algorithmic accountability have become trendy issues in recent years at many journalism conferences. And some media outlets have even hired specialized reporters dedicated to the AI beat. AI reporter Madhumita Murgia, for example, writes for the Financial Times, while the news agency Bloomberg recently advertised for the position of “AI Ethics and Policy Reporter”. Only very few German outlets, though, conduct investigations into the power that algorithms wield over our everyday lives.
“Increasingly, we are faced with self-learning algorithms and with systems that have black-box characteristics, making it all the more difficult to conduct effective research.” Christina Elmer, professor at TU Dortmund University
“It’s complicated,” says Christina Elmer, a professor of digital journalism and data journalism at TU Dortmund University. “Though there are some projects, the field has not developed much in recent years – even as we are increasingly faced with self-learning algorithms and with systems that have black-box characteristics, making it all the more difficult to conduct effective research.” (Transparency note: Christina Elmer is a shareholder at AlgorithmWatch.)
An early approach to algorithmic accountability reporting, analyzing algorithms via reverse engineering, was quickly discarded by journalists, says Elmer. Particularly with algorithms deployed to make hyper-personalized recommendations on platforms like TikTok, for example, it is challenging, she says, to measure discriminatory effects or even to prove their existence. Elmer says journalists have to invest significant energy in developing a quasi-scientific design to produce well-founded results. As such, algorithmic accountability reporting projects tend to be extremely complex and time-consuming – and many media outlets simply lack the resources. “The teams in Germany and internationally that engage in algorithmic accountability reporting tend to be groups of data journalists,” says Elmer. “Precisely those reporters who had more than enough to do in the last three years with COVID, renewable energies, and climate change along with the construction of dashboards.”
Elmer spent many years working for the German news magazine Der Spiegel in various capacities, including head of development, member of the editorial board for Spiegel Online, and head of the data journalism department. She also took part in OpenSCHUFA, one of the first large algorithmic accountability reporting undertakings in Germany. For the project, the Open Knowledge Foundation Deutschland and the NGO AlgorithmWatch called on German citizens in 2018 to request their credit ratings from Schufa, the leading credit rating agency in Germany. The donated data was then examined by data teams from Der Spiegel and the German public broadcaster Bayerischer Rundfunk (BR). They were able to show how many people were declared credit risks through automated processes, thus making it harder for them to secure mobile phone contracts, home loans, and rental apartments. “If you want to produce quality reporting and tell the story in a detailed, exciting, and relevant way, you have to invest a lot, and you might not always get all that much back,” says Elmer. “The reach such stories achieve and the number of subscribers they attract are often, unfortunately, not as high as the stories might merit.”
That makes it challenging to communicate the relevance of such tech research to decision-makers in editorial departments. “If we want to sustainably boost algorithmic accountability reporting, we’ll need to look at alternative financing models, because otherwise, these stories are frequently neglected in favor of other issues,” Elmer believes. Many of the stories produced by non-profit newsrooms like ProPublica and The Markup would never have been reported by profit-driven publications.
Data journalist Uli Köppen from BR also says that the AI hype at journalism conferences is not always reflected in the day-to-day decisions made in the newsroom. “Algorithmic accountability reporting is emerging from its niche and is becoming an element of tech reporting,” she says. “It is being massively hyped and is considered cool, but very few outlets really invest resources into it.” Köppen, though, believes that the trend toward automation should be treated more like climate change, which is no longer just a science issue and has instead become anchored in all sections. “There is a need for specialized teams that can do deep dives, but there also needs to be an awareness in all sections that AI is an issue that affects all of society,” Köppen says.
“For newsrooms, AI issues are frequently less attractive because they are abstract and difficult to illustrate.” Karen Naundorf, AI accountability fellow at the Pulitzer Center
Köppen is head of the first-ever team assembled by a German media outlet to specialize in automation and artificial intelligence: the AI + Automation Lab at BR. “You don’t necessarily need your own AI team, you can also integrate the issue into existing data teams, but for all technology reporting projects, you need plenty of time and expertise – meaning team members with backgrounds in statistics, programming, or beat journalism, or with deeper knowledge of AI and AI regulation,” says Köppen.
Such specialized knowledge, however, isn’t yet part of journalism training programs. “How young people are attracted to the profession and the way they are trained must, and will, be changed,” demanded a recently published (German language) report called “Journalism Education for the TikTok Generation,” compiled by the initiative #UseTheNews from the German news agency dpa in cooperation with the Hamburg University of Applied Sciences and based on interviews with 20 media experts. “Some places seem to still be digesting digitalization and here comes the next revolutionary technology, in the form of machine learning and artificial intelligence, which will have a significant and lasting impact on society and journalism,” the report states. And tech specialists often don’t even consider looking for a job in media – in part because salaries aren’t always particularly attractive. “There aren’t enough talent pipelines between technical degree courses and media outlets,” Uli Köppen agrees.
The AI + Automation Lab at BR was established on top of the existing data team. The three teams – BR Data, BR Reporting, and the AI + Automation Lab – work in an interdisciplinary manner and cooperate in varying constellations. They develop both content and products, such as data or AI applications. But how much time is left over for algorithmic accountability reporting if the teams are also involved in product development? Köppen says quite a bit. “Because we work in smaller sub-teams, there is always a reporting project going on.”
Köppen says that because of the different mindsets, workflows, and pace of work, the unusual structure can be challenging. “The product team works on a clear schedule. In the investigative team, something always happens that you weren’t expecting, which throws the timeline completely out of whack,” she says. “That presents practical problems which can’t be entirely avoided, but it is great when you have the possibility to create ad-hoc teams from three different areas to work together on a project and apply their specialized knowledge.”
Every reporting project in the AI + Automation Lab at BR begins with a hypothesis formulated as clearly as possible and rooted in pre-reporting. A research plan outlines how information is to be collected and what strategies make the most sense. “Extremely complicated issues are involved. You could invest years and write a Ph.D. thesis about them – but we have to remain pragmatic and give ourselves four weeks,” says Köppen. “If we are then able to produce a decent hypothesis, then we keep going. Otherwise, we don’t.”
The goal is always that of lifting the veil and discovering something new about the use of algorithms. But if the team isn’t able to produce an investigative scoop, then the result is sometimes an explanatory piece. To the degree possible, project findings are presented in an interactive and attractive manner. Recently, for example, the BR team demonstrated that photos of women are much more likely to be rated as erotic by image recognition programs than photos of men. In a video accompanying the story, a male BR reporter can be seen putting on a bra – and the image is quickly rated as being “very erotic.” A test of video recruiting software demonstrated – using videos of an actress – how the personality profiles of applicants change as soon as they put on eyeglasses or a headscarf, or when they stand in front of a white wall instead of bookshelves. It showed that the AI behind the program can easily be deceived by external factors.
Uli Köppen sees the lab’s mission as being primarily an educational one, raising society’s understanding of the impact that algorithms have. “We want people to understand that they could be victims of algorithms,” she says. Furthermore, her team is trying to spread their findings as widely as possible – and to explain them as simply as possible so that their audience “doesn’t need a degree in computer science.”
“You have to talk to people to learn how they experience the systems.” Nicolas Kayser-Bril, a reporter for AlgorithmWatch
Nicolas Kayser-Bril also thinks it is vital to approach the issue from the perspective of those affected by the technology rather than from that of the technology itself. “You have to talk to people to learn how they experience the systems,” says Kayser-Bril, who has been reporting on automation for the NGO AlgorithmWatch since 2019. He is bothered by the focus he believes many journalists continue to have on technology. “If there is a problem with the technology, they usually trace it back to a problem with the platform or with the algorithm,” he says. “But if they only view problems from a technical point of view, they are writing for the industry – because technical problems can be solved by the industry.” Journalists, he believes, should focus more on societal problems, and question the degree to which those problems can be solved by technology.
Kayser-Bril himself started out by highlighting concrete problems with certain services with his reporting – such as racism embedded in Google Vision. He demonstrated that the image recognition software incorrectly identified an electronic device in the hand of a Black man as a weapon, whereas the same mistake was not made in the case of a white subject. The experiment went viral on Twitter. “Doing things like that is simple and it is well-received by the audience,” says Kayser-Bril. “But it is essentially just free quality control for the companies, which is why I don’t do that anymore.”
For Kayser-Bril, the hype surrounding ChatGPT is an example of the ongoing inadequacies in the press coverage of algorithms. “In December, a huge number of journalists walked into the trap laid for them by OpenAI,” he says critically, adding that there were too many interviews with ChatGPT and not enough critical analyses. An analysis performed by the Reuters Institute of more than 700 reports in the British media revealed back in 2018 just how significant the industry’s influence over AI reporting is: Around 60 percent of the stories focused on products, initiatives, or announcements from the tech industry. A third of the sources came from industry – twice as many as the number from scientific institutions.
New journalism grants have been established to promote quality reporting on algorithms: At the beginning of 2023, AlgorithmWatch began providing algorithmic accountability reporting fellowships to journalists in Europe. (Transparency note: The author of this piece was a fellow at AlgorithmWatch.) Since last year, the AI accountability fellowships awarded by the Pulitzer Center, based in the U.S., have been funding international reporting projects. The 10-month reporting grants provide up to $20,000. Furthermore, the Pulitzer Center is developing the Artificial Intelligence Accountability Network for journalists around the world who report on AI.
Karen Naundorf, an AI accountability fellow at the Pulitzer Center, believes that journalists should always weigh the weaknesses of AI applications against their possible benefits. The program helps her do so, through regular online meetings with other fellows, for example, in addition to training sessions and assistance from mentors. “I certainly won’t ever become a programmer, and that’s also not my goal,” says the German journalist who works as a correspondent in Buenos Aires. “I believe in classical reporting work. But only those with a basic understanding of AI issues can work effectively in a team with specialists.”
Naundorf is researching the effects of automation on public security in South America. “Politicians there frequently hold up AI as a magic bullet against high crime rates, and such claims are rarely challenged,” she says. In her reporting on the issue, Naundorf works together with photographer Sarah Papst. “AI issues are frequently less attractive for publications because they are abstract and it is difficult to find images to illustrate them,” Naundorf says. In their reporting in recent months, the duo has repeatedly run into obstacles. “There is a lack of transparency no matter where you turn, and it is often pretty much impossible to acquire detailed information on the technologies,” she says.
Partnerships between media outlets and academic institutions can also help complete complex reporting projects. “We are always looking for sparring partners at universities where we can get feedback on methodology to see if what we are doing makes sense,” says BR's Köppen. Christina Elmer of TU Dortmund University is involved in the Science Media Center Germany, which supports journalists for free with their reporting on scientific issues. She believes that a similar think tank in the area of AI would be helpful.
Media outlets, Elmer believes, have to hurry if they don’t want to be left behind. But societal interest in the issue is also a necessity – and so, too, are policymakers who are prepared to introduce regulations to ensure that it all moves in a productive direction, Elmer says. “Otherwise, you regularly trigger scandals when you highlight problems, but they quickly fade.”