Happy Birthday, Digital Services Act! – Time for a Reality Check

The EU’s Digital Services Act (DSA) celebrates its third birthday. This landmark digital regulation is meant to give users a better understanding of how online services decide what they see, and more power to challenge the companies. However, there is still work to do to make the DSA meet the needs of citizens and users.

[Image: a gift box and the text “3 Years Digital Services Act” on a blue background with stars and a large question mark.]

We must first acknowledge that all EU tech regulation today must be seen in light of hostile attacks from the Trump administration and its allies, as highlighted in a recent open letter by us and over 40 other civil society organizations. Many CEOs are rolling back existing platform protections, seen most clearly in Mark Zuckerberg’s obviously Trump-sycophantic changes to Meta in early 2025, which we strongly criticized. A recent hearing in the US, chaired by Jim Jordan – an ally of Trump, Elon Musk, and others in their efforts to censor speech they don’t like – misrepresented the DSA as a censorship tool. While the regulation is not perfect, this is far from true, as various DSA experts noted in response. We hear similar attacks from inside the EU, including in Germany.

Despite such attacks, we have not (yet) seen strong voices in the EU calling for a “simplification” of the DSA, as we have for the AI Act, for example. The DSA has already shown some positive impact, such as TikTok’s decision to withdraw the TikTok Lite Rewards program, which would not have met acceptable standards of risk in the EU. In general, however, we have seen delays and gaps in enforcement. Despite flagrant disregard of the DSA by X – including its refusal to provide AlgorithmWatch with data to investigate the scourge of non-consensual nudification services – we have been waiting for enforcement action against the platform for months. The first round of risk assessments required of platform providers was out of date and provided little information.

It is important to keep pushing back against misrepresentations of the DSA from forces hostile to European democracy, while also fighting to make the DSA better. So what does the DSA offer, and what are we doing to improve it? We have focused on two key areas: data access and risk assessments.

More Access to Data

Data access has been a key focus of AlgorithmWatch’s DSA work. It is essential for spotting potential risks and harms online, but in recent years platforms have been trying to cut it off. To paraphrase the research scientist Philipp Lorenz-Spreen, this is like trying to research climate change when only fossil fuel companies hold the emissions data.

We are pushing for more effective and widespread use of the new data access provisions under the DSA. We created a simple one-page guide for civil society organizations, journalists, and others who want to apply to monitor and research content spreading publicly on the largest services. It is important that many organizations – including organizations outside the EU – explore and demonstrate the power of the new data access rules. So if you think you could use data from large platforms and search engines, please read our guide and reach out to see whether we can support you.

Our latest effort pushes the boundaries further: to celebrate the DSA’s third birthday, we have worked with the Mozilla Foundation and the DSA40 Data Access Collaboratory to pioneer a “mass request.” The idea builds on a data access hackathon we held in April with organizations from across civil society, academia, and fact-checking. If successful, the request will grant multiple organizations access to daily lists of the most viral posts in each EU member state. This will allow us to quickly see which content is potentially having the most impact, and to better understand what sort of content algorithms push most forcefully. The power to accept or reject the request lies with the platforms. If they reject it, we will be ready to challenge that decision.

Looking forward, an exciting development could be new rules for researcher access to private platform data. We have been waiting for these rules to be clarified since February 2024, following our initial request alongside other early adopters. The rules should (finally) properly come into force later this month and, if effective, could provide new insights from previously inaccessible information. We provided input into these new rules, based on our experiences of trying to get data on chatbot risks to German regional elections. Our input particularly stressed that the rules must open up a range of research projects to address risks in various specific contexts. This will involve working with the Digital Services Coordinators – the national regulators – to resolve inevitable disputes between researchers and platforms over questions such as whether data is protected by trade secrets.

We know from our work representing the F5 Civil Society Coalition on the advisory board of the German Digital Services Coordinator, the Bundesnetzagentur, that the regulators are taking the opportunities and challenges of data access very seriously. A major question remains how the regulator in Ireland, where most of the large platforms are based, will handle all the requests it is likely to receive, and how much resistance the companies themselves will put up. Many of them pay little respect to the existing rules on access to public data. X regularly refuses to provide data. Meta created a “transparency tool” for the EU Parliament elections which, when we tested it, turned out to be riddled with issues. TikTok’s API has shown numerous accuracy and usability problems. Without more genuine engagement in problem-solving from the services, implementation will remain challenging and more work will fall onto civil society organizations and partners.

Risk Assessments

A major part of the DSA is the requirement that very large services conduct proactive risk assessments. This aims to end a too-common cycle in which journalists or researchers flag harms and services retroactively address them. However, the idea is currently not living up to its promise. The first risk assessment reports did not, on the whole, provide any real benefit, as we and other partners wrote in a group analysis here.

There is also a persistent, deeper confusion about the services’ requirement to assess “systemic risks.” What distinguishes these from other risks? This question may be clarified over time, but there are traps to avoid. An overly broad definition could lead to unclear enforcement decisions; a very narrow definition, on the other hand, could exclude a huge number of relevant issues – for instance, if potentially “risky” content were defined solely by a large number of viewers. We have proposed an idea we call the “Dual Track” approach, which aims to combine the benefits of broad research with clearly defined enforcement decisions.

Right now, we are focusing particularly on the systemic risks that generative AI summaries in search engine results pose to reliable media and journalism. Appearing right above the results, these often unreliable summaries entice users not to visit the source material or follow the provided links. Traffic to sources like newspapers and websites might dry up – and, as a consequence, reliable journalism as a viable business model would be put at risk. In an open letter to the Bundesnetzagentur, AlgorithmWatch and other organizations highlighted these risks and the currently unclear state of the “risk assessments” Google has conducted. Much DSA work has focused on social media platforms; we believe search engines deserve more attention, and we will use the tools the DSA gives us to address this.

A Step towards a Better Internet

The need for an effective DSA is stronger than ever. Regulating technology cannot solve today’s political problems, nor should it be expected to. But the DSA can limit polarization, attacks, misleading information, and other risks to society that companies too often allow to flow through their services.

Recent history shows that the CEOs of tech companies cannot be relied upon to make these decisions in the interests of citizens, democracy, and human rights. Instead, they will keep testing products like AI summaries on us. The increasing alignment of many tech CEOs with anti-democratic forces in the USA – and with political forces in Europe who see them as a model – also increases the risk that opaque algorithms and unaccountable moderation decisions enable anti-democratic censorship.

The DSA can provide tools for many different groups against such challenges. For citizens to understand, shape, and challenge their experiences online. For researchers and journalists to investigate and understand what is really happening online. For civil society and activists to highlight harms and advocate for better protections. For people within companies who wish to do good risk assessment and mitigation work, and who can draw on the work of these other groups in their struggles to shift their employers’ decisions. And for all who wish to design and build towards the promise of a better internet. The DSA alone will not make that happen. But it may buy us some space and opportunity to do that work.