An abridged version of this text was originally published as an op-ed at Thomson Reuters Foundation News.
On 9 March, a Russian airstrike tore through a maternity and children’s hospital in the Ukrainian city of Mariupol, killing at least three people and injuring several more. Pictures of the bombing’s victims shocked the world, including that of an injured pregnant woman cradling her womb as she is being carried on a stretcher, shortly before she and her baby perished.
As news reports of the hospital bombing began to circulate, the Russian Embassy in London took to Twitter to cast doubt on their validity, with unfounded claims that the hospital was non-operational at the time of the attack, and that the horrifying photographs had been staged. At that time, Twitter and other social media platforms were scrambling to deal with the threat of pro-Russian disinformation that had already been brewing long before the country’s invasion of Ukraine. Though Twitter removed the embassy’s posts, false pro-Russian narratives such as these have continued to flood social media and find traction with audiences, propelled by the platforms’ recommender algorithms.
The information war being waged online around the conflict in Ukraine is but one of many troubling examples of how disinformation can flourish on online platforms, which poses serious risks to the social and institutional fabric of the EU. Yet even as major platforms like Facebook, YouTube, Twitter, and Instagram have become hugely influential on the way in which we debate and deliberate in our democratic societies, Big Tech companies have continued to essentially regulate themselves when it comes to managing the information that flows through their services.
This state of affairs has put enormous power in the hands of just a few corporations, who time and again have shown themselves to prioritize profits over the public interest by willfully ignoring or downplaying the harms that their services cause. Documents leaked by whistleblower Frances Haugen revealed, for example, that Facebook’s revamped newsfeed algorithm was known internally to be amplifying misinformation and violent content on the platform. But Facebook leadership failed to heed the warnings of its own researchers because the new algorithm was good for engagement, which was good for the company’s bottom line.
EU lawmakers recognized the need to change the rules of the game in order to curb the power of Big Tech and hold major platforms accountable for their actions. This imperative has resulted in the Digital Services Act (DSA): a more than 300-page rulebook for tech companies which would force them to do more to tackle illicit content on their platforms, uphold users’ rights, and address the detrimental impact they can have on people and society – or else risk billions of Euros in fines.
How does the DSA propose to deliver on its central promises? Thanks in part to the collective efforts of civil society, the DSA would limit platforms’ most egregious forms of tracking-based advertising and deceptive design practices. It also introduces important safeguards for individual rights such as improved “notice-and-action” procedures for users to flag potentially illegal online content, as well as redress mechanisms for users to dispute platforms’ content moderation decisions.
Perhaps the DSA’s most innovative requirement is for online platforms to identify and tackle so-called “systemic risks” stemming from the design and use of their services – risks like the dissemination of hate speech, the infringement of fundamental rights, and intentional manipulation that may have negative effects on civic discourse and electoral processes. Crucially, whether platforms are adequately dealing with these systemic risks will be verified by independent auditors as well as by regulators and external researchers, who may gain access to platform data.
In the past, platforms have repeatedly shown themselves to be indifferent or else outright hostile toward researchers who sought to scrutinize platform data. But the kind of third-party data access the DSA now introduces is essential to informing the public debate about how platforms influence our public sphere. Platforms cannot be trusted to simply assess themselves – we require external scrutiny in order to illuminate how their automated systems personalize content via recommender systems and targeted advertising, as well as how they deal with illegal content, handle user complaints, and apply their terms of service.
But as with the entirety of the DSA, the effectiveness of these data access rules will ultimately rest on whether EU officials can put the law into practice. Properly vetting researchers to qualify for data access and safely facilitating such access, for example, will require investments in expertise and establishing new protocols that work. It will depend in part on the capacities of future "Digital Services Coordinators" (DSCs) to be set up by EU member states as well as of untested independent advisory bodies. It also remains to be seen to what extent platforms will invoke a "trade secrets" exemption, which is explicitly mentioned in the DSA, to deny data access requests – and whether regulators will intervene on behalf of researchers in such cases or turn a blind eye.
Such enforcement will require a strong governance framework with adequate investments in staffing and capacity-building, and must learn from past mistakes in enforcing the General Data Protection Regulation (GDPR). To this end, the DSA shifts enforcement over the largest platforms to the European Commission, and includes a “polluter pays” principle, which will require platforms to pay for their own supervision by the Commission.
Although the DSA is an imperfect document – no doubt influenced by a massive lobbying campaign from Big Tech companies – taken as a whole, it represents a potential paradigm shift in tech regulation. Even so, the great promise of the DSA will only be realized if its laws can be implemented and enforced in practice.
Read more on our policy & advocacy work on ADM in the public sphere.