Risky business: How do we get a grip on social media algorithms?
Public scrutiny is essential to understand the risks that personalized recommender systems pose to society. The DSA’s new transparency regime is a promising step forward, but we still need external, adversarial audits to hold platforms accountable.
If social media is the water we’re swimming in, there are more than a few reasons to be alarmed by its undercurrents. Hate speech is trending on Twitter following Elon Musk’s chaotic takeover of the company. TikTok—the most influential platform among teenage users—bombards vulnerable young people with content promoting eating disorders and self-harm. And despite early warnings, a range of platforms including Facebook and YouTube continued carrying extremist content that helped enable the January 8th antidemocratic insurrection in Brazil—and turned a profit in doing so.
These examples come from a growing body of public interest research that calls out social media platforms for their role in facilitating the spread of toxic content. Now, thanks to the Digital Services Act (DSA), there is expanded scope for researchers seeking to formally access and make sense of platforms’ internal data. This kind of research will be crucial to help identify risks emerging from online platforms.
While the DSA’s new transparency regime is a promising step forward, it will take some time before we know its true effectiveness. Meanwhile, our collective ability to hold platforms accountable will continue to rely on the work of adversarial researchers, those able and willing to employ tools that shine a light on the inner workings of platforms’ opaque algorithmic systems.
What kind of social media algorithms are we talking about?
One of the main features that social media platforms use to hook and monetize our attention is the recommender system. Recommender systems are sets of algorithms that work together to harvest our data and present us with highly personalized, scrollable feeds of content. These systems often follow a clear underlying logic: keep us engaged for as long as possible in order to show us more ads and thus rake in greater profits.
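To make that logic concrete, here is a deliberately simplified, hypothetical sketch of engagement-based ranking. The fields, weights, and scoring formula are assumptions made for illustration; real recommender systems are vastly larger, machine-learned, and proprietary.

```python
# A deliberately simplified, hypothetical sketch of engagement-based ranking.
# Field names, weights, and the scoring formula are illustrative assumptions,
# not any real platform's logic, which is machine-learned and proprietary.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    topic: str
    predicted_watch_seconds: float  # model estimate of how long the user will watch
    predicted_like_prob: float      # estimated probability of a "like"
    predicted_share_prob: float     # estimated probability of a share


def engagement_score(post: Post, topic_affinity: dict) -> float:
    """Score a post by how likely it is to keep this particular user engaged."""
    affinity = topic_affinity.get(post.topic, 0.1)  # how strongly this user favors the topic
    return affinity * (
        0.5 * post.predicted_watch_seconds
        + 20.0 * post.predicted_like_prob
        + 40.0 * post.predicted_share_prob
    )


def rank_feed(candidates: list, topic_affinity: dict) -> list:
    """Return candidate posts sorted by predicted engagement, highest first."""
    return sorted(candidates, key=lambda p: engagement_score(p, topic_affinity), reverse=True)


if __name__ == "__main__":
    # A user whose history suggests a strong interest in fitness content.
    affinity = {"fitness": 0.9, "politics": 0.3}
    feed = rank_feed(
        [
            Post("a", "politics", 20.0, 0.10, 0.02),
            Post("b", "fitness", 45.0, 0.30, 0.05),
            Post("c", "fitness", 60.0, 0.50, 0.10),
        ],
        affinity,
    )
    print([p.post_id for p in feed])  # the most "engaging" posts come first: ['c', 'b', 'a']
```

Even this toy version illustrates the tendency: whatever the model predicts will keep a given user watching, liking, or sharing rises to the top of the feed, regardless of whether it is good for that user.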
Such personalized recommender systems have the power to influence our opinions and our behavior, particularly as more and more people come to rely on them for news, information, and social interactions. Given the influence that platforms wield, we need to understand how they are fueling serious societal risks like the spread of disinformation, hate speech, and gender-based violence, just as we need to understand the risks they pose to mental health and democracy itself. But how can we do that?
Until recently, our ability to monitor social media algorithms has largely depended on platforms’ voluntary disclosures, either in the form of highly curated transparency reports or through limited (and often unreliable) data sharing with select researchers. But there is much more to the story than what the platforms willingly tell us. We know this thanks to revelations from investigative journalists, independent researchers, and whistleblowers, many of whom have worked at considerable personal risk.
EU regulators recognized that we can’t simply rely on the goodwill of platforms to be transparent about how their algorithmic systems work or to take appropriate measures to mitigate risks. Now they have promised a new era of public oversight and accountability for these platforms under the DSA.
Among other things, the DSA will obligate the largest platforms, such as YouTube, TikTok, and Facebook, to formally assess and report on how their products, including recommender systems, may exacerbate risks, and to take measurable steps to mitigate them. These self-assessments won’t simply be taken on faith: the largest platforms will also be forced to share their internal data with independent auditors, EU and Member State authorities, and researchers from academia and civil society for further scrutiny.
The ink has barely dried on the DSA, however. There is still much work that needs to be done before it becomes fully applicable across the EU in February 2024.
Why we can’t wait for official audits to hold platforms accountable
Meanwhile, the societal risks that emerge from social media will not pause to wait for an official audit. That’s why researchers and civil society organizations are conducting so-called adversarial audits to understand how algorithmic decision-making systems are influencing society. Together with partner organizations, AlgorithmWatch is currently conducting the DataSkop investigation into the TikTok “For You” page, inviting TikTok users to donate their data so that we can better understand the platform’s recommendation logic. We want to know how and where trends and niches emerge on TikTok, and whether there are indications that the platform is pushing certain content into users’ feeds.
Such a data donation project can be regarded as a type of external, adversarial audit, in that it does not rely on cooperation from the platforms or access to their internal data in order to glean insights into how their systems work. Adversarial audits have shown, for example, that Instagram disproportionately pushed far-right political content via its recommender system and that TikTok promoted supposedly banned wartime content to users in Russia.
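To give a sense of what researchers can do with donated data, here is a minimal, hypothetical sketch of one possible analysis step. It is not the actual DataSkop pipeline; the file format and column names are assumptions made for this example.

```python
# A hypothetical, minimal sketch of analyzing donated feed data.
# This is NOT the actual DataSkop pipeline; the CSV layout and column
# names (donor_id, video_id, hashtags) are assumptions for illustration.

import csv
from collections import Counter


def hashtag_frequencies(donation_csv_path: str) -> Counter:
    """Count how often each hashtag appears across all donated 'For You' feeds."""
    counts = Counter()
    with open(donation_csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Assumed format: one row per recommended video,
            # with hashtags joined by "|" (e.g. "fitness|gymtok").
            for tag in row["hashtags"].split("|"):
                if tag:
                    counts[tag.lower()] += 1
    return counts


if __name__ == "__main__":
    for tag, n in hashtag_frequencies("donations.csv").most_common(10):
        print(f"#{tag}: recommended {n} times across donated feeds")
```

In practice, researchers also have to weigh who donates data, since self-selected donors are rarely representative of the user base as a whole.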
While adversarial audits alone cannot capture the full story of how recommender systems work (they may, for example, be limited to publicly available data or suffer from selection bias), they also have certain advantages over regulated data access, such as giving researchers more flexibility to reformulate research questions after the data has been collected. And adversarial audits provide one of the few available means to independently verify whether the data and compliance reports provided by Big Tech companies are accurate.
Adversarial audits have a proven track record. They are crucial to exposing risks on social media, raising public awareness, and putting pressure on the platforms to make necessary changes for the good of their users and the public. This kind of research will remain an important complement to the DSA’s regulated data access, and it must be protected given platforms’ track record of hostility toward such external scrutiny.

At this moment, a wide range of stakeholders from regulators and industry to academia and civil society are grappling with practical questions around how to effectively conduct independent audits of Big Tech platforms within the scope of the DSA, with the understanding that, if left unchallenged, platforms themselves will set the terms. In the meantime, watchdogs will continue the important work of conducting algorithmic audits of Big Tech platforms. We need greater transparency on all fronts if we are to truly hold platforms accountable for the risks that their services pose to society.
Read more on our policy & advocacy work on the Digital Services Act.