Making sense of the Digital Services Act

How to define platforms’ systemic risks to democracy

It remains unclear how the largest platforms and search engines should go about identifying “systemic risks” to comply with the DSA. AlgorithmWatch outlines a methodology that will serve as a benchmark for how we, as a civil society watchdog, will judge the risk assessments that are being conducted at this very moment.

It is hard to deny that dominant internet services provide essential consumer goods for much of the world. Having to go a week, much less a day, without efficient Google searches, or losing the ability to interact with others via a social media platform like Instagram, would surely upend many people’s personal and professional lives. Despite their benefits, however, these and other powerful internet services are radically transforming our capitalist society into, potentially, its worst version yet. In so doing, the platform economy gives rise to a myriad of risks which may negatively impact individuals and democracy itself. 

Assessing risks

The European Union’s lawmakers are convinced that platforms can pose such risks, and have therefore enacted the Digital Services Act (DSA), a law that requires so-called Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to “diligently identify, analyze and assess any systemic risks in the Union stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services.” These risks include, but are not limited to, the dissemination of illegal content through platforms’ and search engines’ services; any actual or foreseeable negative effects on the exercise of fundamental rights; risks to civic discourse, electoral processes, and public security; risks in relation to gender-based violence, the protection of public health, and minors; and serious negative consequences for people’s physical and mental well-being. When such risks are identified, companies have to take “appropriate measures” to mitigate them.

Guidance is lacking

Whereas the law provides a long list of possible measures to mitigate risks – e.g. “adapting the design, features or functioning of their services, including their online interfaces” – it is remarkably quiet on how VLOPs and VLOSEs should conduct a risk assessment and what legislators expect from them. The law does spell out that VLOPs and VLOSEs must account for the ways in which certain factors may influence systemic risks, including the design of recommender systems and any other relevant algorithmic systems, content moderation systems, applicable terms and conditions and their enforcement, systems for selecting and presenting advertisements, and data-related practices of the provider.

There is, however, no mention of a procedure that would delineate how to identify a systemic risk in practice, nor has the European Commission published any guidance for these companies. This means that, at the moment, it is very unclear how VLOPs and VLOSEs should assess risks to comply with the DSA’s requirements. VLOPs and VLOSEs are required to submit their first risk assessments to the European Commission by August 2023 – it must be said, though, that it is unlikely they will make these assessments publicly available. It is also unclear what the Commission will release about their contents, and when.

Charting a path forward

This is the context for our paper. We will not attempt to provide a holistic assessment of the goods and the ills of digital society as shaped by currently dominant internet services. Our goal is narrower and more specific: to determine whether some elements of the risk that an internet service generates for freedom of speech and media pluralism can be identified in practice. Our hope is that with this contribution, we provide a tangible point of departure for the discussion about what different stakeholders can and should expect from a risk assessment, and how it could be done in practice.

Read more on our policy & advocacy work on the Digital Services Act.

Michele Loi (he/him)

Senior Research Advisor


Michele Loi, Ph.D., is a Marie Skłodowska-Curie Individual Fellow at the Department of Mathematics of the Politecnico di Milano, with a research project on Fair Predictions in Health. He is also co-principal investigator of the interdisciplinary project Socially Acceptable and Fair Algorithms, funded by the Swiss National Science Foundation, and has been Principal Investigator of the project "Algorithmic Fairness: Development of a methodology for controlling and minimizing algorithmic bias in data based decision making", funded by the Swiss Innovation Agency.