A Dual-Track Approach to Systemic Risks Ensures Flexibility and Transparency in the Digital Services Act
Assessing systemic risks under the DSA requires a balance between flexibility and standardization. Digital services resist traditional systemic risk modeling approaches, and we cannot expect research to converge on a clear consensus for topics such as “election interference” or “freedom of expression”. Flexibility is needed to account for the range of technologies, risks, research methods, possible mitigations, and more. Mandating specific approaches could constrain relevant and creative work, and could even lead VLOP/SEs and researchers to optimize for specific metrics rather than for genuinely good outcomes.
However, standardization also serves important functions: ensuring clarity of process, objective treatment by regulators, and the ability to analyze trends across digital services. This balance between flexibility and standardization must be kept in mind as the DSA is implemented; we must avoid an unintentional drift towards one extreme or the other.
We therefore propose a “dual-track approach” to assessing and mitigating systemic risks. The first track is a broad research process. Interpretations of whether research is relevant to “systemic risk” must be kept broad, especially in data access applications under Article 40 and in decisions about which research to fund or otherwise support. This will ensure an evolving and creative landscape of assessment and mitigation methods.
The European Centre for Algorithmic Transparency and the Digital Services Board should play a role in collating and assessing the general quality of research methods, evidence, and proposals, through transparent processes. This should enable systematic comparison of risk assessment and mitigation approaches – both across platforms and over time – to build a dynamic evidence base for good practices without specifying one single “best practice.”
The second track is the compliance process. The process by which risk assessments and mitigations are judged – especially by the Commission and Digital Services Coordinators – must be clearer and more transparent. This can involve standardized processes for assessing whether research and methods in the assessments are “good enough” – particularly when compared against a wider body of quality work collated in the research track above.
In other words, assessments need not all take the same approach or use the same metrics. But they should be able to justify why they used certain approaches and reached certain conclusions, given the alternatives and differing conclusions recorded in work produced under the research track. There would also be opportunities for external actors to challenge conclusions, again by pointing to the collated body of quality work under the research track. Taken together, this dual-track approach ensures that the diversity of the research community can effectively support digital services and regulators in driving up the quality of risk assessments and mitigations.