“Risks come not just from technology” – Input to the EU on systemic risks and the DSA

AlgorithmWatch has submitted expert input to the EU on systemic risks stemming from online platforms and search engines. We argued that risks come not just from technology, but also from (1) the attitudes of companies and (2) a lack of transparency around enforcement. This input was submitted at the invitation of the European Board for Digital Services and the European Commission, to help prepare their first report on systemic risks under the Digital Services Act. Read our full response here:

Position

7 April 2025

#dsa #eu

[Image: Server with a small lock icon. Andres Atehortua via Flickr, CC BY-SA 2.0]
Oliver Marsh
Head of Tech Research

Our Research

AlgorithmWatch has conducted research into risks from Large Language Models, social media platforms, and sub-standard mitigation measures from tech companies. We have also engaged with researchers to report on how they approach systemic risks under the DSA. This is in addition to our DSA policy work on data access, election guidelines, and stakeholder engagement.

Regarding our work on Large Language Models (LLMs), we particularly wish to flag that (i) these tools are increasingly being integrated into functionalities with relevance for the DSA, in particular as “AI-generated summaries” in search engines, and (ii) our research has encountered substantial barriers to accessing realistic data at scale from chatbots. We urge further consideration of how LLMs can be addressed within the framework of the DSA, including risks arising from “normal use” as well as from malicious uses by bad actors.

More broadly, we remain concerned that many systemic risks stem fundamentally from the internal organizational priorities and practices of large tech companies. Recent engagement with representatives of some VLOPs and VLOSEs has suggested, in line with public pronouncements from some CEOs, a growing unwillingness to go beyond what those companies interpret as basic compliance with the DSA (or even to comply at all).

It is therefore vital to make clear the widely shared underlying aims of the DSA: an accountable online space in which risks to society and fundamental rights are identified and minimized, supported by transparency and opportunities for genuine external scrutiny of platforms and search engines. Further guidance from the European Commission for companies and other stakeholders may help achieve these goals, though it also risks creating a “bare minimum” that companies aim for in compliance.

Use of Research in the DSA

We welcome that the Commission is inviting input clearly and openly. This continues a pattern some of us have experienced of Commission officials proactively engaging with individuals and organizations to seek evidence. We have heard numerous examples of Commission officials engaging with published research, including newly published academic work, and reaching out to its authors. We also welcome that the concept of “systemic risks” and the related evidence collection are being kept broad, allowing a range of researchers (academics, CSOs, journalists, and others) to bring a range of relevant perspectives, including on emerging threats which may not fit easily into existing frameworks. This breadth must be maintained in future implementation of the DSA, including in how Article 40 requests are interpreted, to ensure that research can match diverse and potentially unexpected risks.

However, as we have argued in our ongoing proposal for a “Dual Track” approach to systemic risks (here in a short policy summary, and here in a longer discussion produced by academic colleagues), it is important that the breadth of the concept of “systemic risk” is balanced by clear and transparent procedures for how this research feeds into enforcement action. The present sources for understanding this consist largely of press releases; occasional large roundtables (which do not easily allow for dialogue); and ad-hoc engagement, which can be more dialogic but is often limited in who can participate. While these have their place, on their own they do not constitute transparent and effective ways to engage with researchers.

At a recent workshop on the DSA and research, featuring many academic researchers, we were struck that many researchers conducting relevant work do not know how to engage the Commission effectively. It is unclear, for example, how many academic researchers received this call for input. We would particularly point to academics conducting systematic reviews, such as Philipp Lorenz-Spreen’s work on polarisation. Such systematic reviews are an example of activity that academics are well placed to conduct, whereas CSOs are best placed to highlight and identify specific risks, including to particular groups.

By contrast, many CSOs feel more able to present evidence to the Commission, but are unclear how it feeds into enforcement. There is also a risk of external researchers over-focusing on clearly salient topics, such as social media during elections, and thereby missing gaps in the Commission’s current knowledge base. Importantly, these may include gaps of which the Commission itself is not aware, and which therefore cannot be addressed simply by the Commission stating its preferred topics.

Various techniques could be deployed to seek engagement in different ways. These could include specific, highly topical engagement formats which allow for dialogue between the Commission and researchers, and in which the process of participation is transparent. Vetting of researchers could be used for dialogues involving information the Commission does not wish to make public, and/or to ensure highly relevant participants. There could be open calls for individual experts, in the manner seen for the EU AI Act. There is existing research on how to engage experts and set up dialogic expertise-sharing, which could be drawn on in developing engagement formats.

There are, of course, risks to over-transparency. But current circumstances leave researchers unclear whether (and how) we should plan or conduct work in ways that can support the DSA, and uncomfortable with not understanding how our work fits into the wider body of evidence that may be used for enforcement. Ultimately the aim must be, as this call for input suggests, to draw on a wide and diverse range of evidence to inform understanding of systemic risks and mitigations. Underpinned by a clearer and more transparent process for how evidence is used in enforcement, this would strongly demonstrate the legitimate and positive contribution of the DSA to a better online environment for all EU citizens.