AlgorithmWatch proposals on mitigating election risks for online platforms
Despite hopes that the Digital Services Act could protect against online risks during upcoming elections, this looks increasingly unlikely due to delays and issues in implementation. The EU Commission has sought input on how to mitigate election risks, and AlgorithmWatch has responded.
AlgorithmWatch has responded to the European Commission's draft Guidelines for online platforms on mitigating risks in elections. This document comes in the context of upcoming elections, including those for the European Parliament. However, the Digital Services Act (DSA) is unlikely to be fully effective in time, given delays in data access powers and the absence of transparency or guidelines on "systemic risk" assessments. This greatly limits the hoped-for ability of the DSA to help find and mitigate risks, including increased polarisation, information warfare by hostile actors, and misinformation from AI chatbots.
These guidelines could provide a weakened but still useful alternative to a fully implemented DSA. Assessing and mitigating online risks is a complex and fast-moving field. There is no one-size-fits-all approach, and decisions cannot be left to platforms alone: they need to build on independent expertise and the sharing of information. Our response therefore proposed the following:
- The guidelines should provide clarity for platforms and external actors on how to meet requirements under the DSA and other related legislation, particularly the important provisions around systemic risks and data access, which are currently not fully effective.
- The guidelines can point to broadly accepted basic practices without which Very Large Online Platforms and Very Large Online Search Engines (VLOP/SEs) run clear risks of being negligent in the face of election risks, for example: not having access to linguistic or country-specific knowledge; lacking reporting, escalation, and decision-making processes which can effectively address emerging risks at sufficient pace; or failing to protect against highly predictable risks in a given election.
- Beyond this, the guidelines should not try to direct VLOP/SEs towards specific or novel approaches in a generalised fashion, e.g. saying platforms "should use watermarking" or "should apply inoculation measures". The question of "best practices" against election risks in digital environments is too complex, context-sensitive, and fast-moving for such an approach.
- The guidelines should instead require VLOP/SEs, as part of preparations for a particular election, to find, use, and share the best research and practices available at the given time and for the given context; the guidelines should also propose methods by which this can be accomplished. The Digital Services Coordinators (DSCs) and the Digital Services Board can play a valuable convening role here, even in the absence of full legal designation.
- The guidelines must ensure that processes of collaboration and information-sharing, while valuable and welcome, are coordinated to ensure external experts and civil society groups are not unnecessarily overstretched by duplicate requests from multiple VLOP/SEs.
- For generative AI, based on our past research, we propose:
- For information about elections, generative AI chatbots should always return answers equivalent to those of traditional search engines: i.e. lists of links to sources, ranked for quality, relevance, and reliability, rather than attempting to summarise and reproduce information in a probabilistic fashion.
- There should be straightforward and stable ways to automate the repeated generation of answers to prompts, to allow for external assessment, as well as access to volume-over-time and location-specific data on how many people are querying chatbots about particular topics or keywords.
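The kind of automated, repeated prompting the last point calls for could be sketched as follows. This is a minimal illustration, not a real integration: `query_chatbot` is a hypothetical stand-in for whatever stable API a platform would expose to external assessors.

```python
from collections import Counter

def query_chatbot(prompt: str) -> str:
    # Hypothetical placeholder: a real assessment tool would call the
    # platform's chatbot API here and return its answer as text.
    return "Example answer about election procedures."

def collect_repeated_answers(prompt: str, runs: int = 10) -> Counter:
    """Send the same prompt several times and tally the distinct answers,
    so external assessors can measure how much responses vary."""
    return Counter(query_chatbot(prompt) for _ in range(runs))

tally = collect_repeated_answers(
    "When is the next European Parliament election?", runs=5
)
```

With a probabilistic chatbot, the tally would typically contain several distinct answers for the same prompt; comparing their factual accuracy across runs, topics, and locations is what the proposed data access would enable.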