EU Commission asks foxes to stop eating chickens but does not build fence

On Wednesday, the European Commission published a "Guidance on Strengthening the Code of Practice on Disinformation" aimed at large tech companies. The wish-list of measures lacks any enforcement mechanism.

Nicolas Kayser-Bril

In mid-2018, the European Commission and a lobby for large tech companies collaborated on a Code of Practice on Disinformation. The document, written by Siada El Ramly, the head of DOT Europe (then called Edima), a group representing Facebook, Google, Apple and other very large tech companies, set out vague rules to “tackle disinformation”. All measures were voluntary, and no enforcement mechanism was planned.

The Commission followed up on the implementation of the Code. In particular, it paid about €250,000 to a consultancy for a report on the topic published in May 2020. The Commission published another report in September 2020, which summarized the previous one. Both reports acknowledged the progress made by some large tech companies in dealing with disinformation but found that much could be improved.

In the “Guidance” published on 26 May, the Commission lists what large platforms should do. Over 25 pages, it asks Facebook, Google and other signatories of the Code to “step up their efforts” to fight disinformation. Adverts should now be considered potential vectors of misinformation – something explicitly outside the scope of the 2018 document – the Commission writes (p. 8). Newsfeed algorithms should stop the viral spread of misinformation, and users should be able to fine-tune recommendation engines if they so wish (p. 14). Messaging services should add a warning when users share disinformation, but this should not come at the cost of breaking encryption (p. 16). In total, the Commission wrote no less than 179 statements specifying what big tech companies “should” do.

Wishful thinking

Many of the Commission’s recommendations are in line with current legislative proposals such as the Digital Services Act (DSA) or the Regulation on AI. The Commission portrays its guidance as preparing the signatories for mandatory rules they will likely be subjected to anyway under the DSA. However, it remains unclear why the Commission relies so heavily on a self-regulatory Code of Practice that lacks any enforcement mechanism. Asking platforms to stop accepting adverts carrying misinformation, for instance, is akin to asking them to forgo profit. While these companies have internal guidelines that ask advertisers not to engage in certain behaviors, investigations by AlgorithmWatch and others have shown that these guidelines were not, or not meaningfully, enforced.

Moreover, whistleblowers have shown in numerous instances that large tech companies had little or no interest in upholding voluntary moral standards. In September 2020, a former Facebook employee revealed that the efforts to clamp down on networks of fake accounts were patchy at best and that Facebook’s official data on the topic (uncritically quoted by the Commission) was probably bogus. In December 2020, another former employee said that Facebook refused to implement proposals to improve the quality of the content shown to users because it would have disproportionately impacted right-wing organizations.

The Commission’s guidance also asks big tech companies to allow external researchers to access data. Enabling independent research is doubtlessly key to enhancing transparency on the functioning of platforms. AlgorithmWatch has advocated intensely for legally binding data access mechanisms for public interest research – a call that has (at least partially) been taken up by Article 31 of the proposed DSA. However, absent stringent legal obligations, it remains doubtful whether platforms will in any meaningful way allow for such external scrutiny. In late 2020, Google introduced new internal rules that prevent researchers from autonomously investigating sensitive issues, including race. The document also asks researchers to strike a positive tone when talking about automated systems. It is unclear why the company would allow external scholars to perform research that is off-limits for its own employees.


In its guidance, the Commission argues that the European Digital Media Observatory (EDMO) should coordinate efforts to implement the Code of Practice. EDMO is presented as an independent and well-established organization. While EDMO brings together two universities and two for-profit companies with expertise on the topic, it was set up by the Commission in 2019 and its budget of €2.5m runs out in December 2022. The postdoctoral researchers and consultants working for EDMO, who depend on that funding for a living, might have a conflict of interest. The Commission has yet to answer our questions on how EDMO will be funded thereafter, or how a body it exclusively funds could be independent.

Most of the harms identified by the Commission under disinformation can be fought using existing laws. Scams, public insults, deceptive commercial practices or breach of trust are, in most cases, already punishable offenses. If they were adequately funded and had the relevant capacities and know-how, the police and the judiciary of the Member States could go after wrongdoers.

Self-regulation cannot replace laws, and companies cannot replace lawmakers. If self-regulation merely serves as a fig leaf, relying on it is not only pointless but can also be counterproductive. It may give the impression that other, more stringent measures are not as urgent as they in fact are. Given the evidence seen so far, the work the Commission put into updating the Code of Practice is not only likely in vain but may also play into the hands of those it wishes to constrain.
