Are Google AI Overviews killing media pluralism? AlgorithmWatch is among the first organizations to investigate.
Google could destroy the web traffic that is the lifeblood of organizations producing reliable information. The culprit is its new AI Overviews: a tool that hides links to real websites in favour of Google's own summaries, which have, for example, claimed that Olaf Scholz is still the German Chancellor. AlgorithmWatch is making one of the very first data access requests under new EU rules to investigate these systemic risks to media freedom.

Google has rolled out AI Overviews by default in its search product. These Overviews appear right above, and crowd out, links to real information, meaning users need never visit any actual sources. Traffic to websites, from journalistic outlets to expert organizations, is already drying up, creating a severe risk that producing reliable information will simply no longer be a feasible business model.
What will replace these websites? An unreliable, opaque AI tool. Google claims it won't show AI Overviews for news or political topics. Yet when we searched in September for information about the German government, we easily got AI Overviews, including several telling us that Olaf Scholz is still Chancellor of Germany. In our more recent tests, AI Overviews appear less frequently for political topics. As so often with AI, it is unclear how these tools work, what decisions their designers are making, and whether the risks are being properly accounted for.
The fact that politics-related AI Overviews now seem harder to trigger may be a response to a complaint about the rollout that we, as part of an alliance of NGOs, associations, and media organizations, sent to the German Digital Services Coordinator earlier this month. Or the change may have another cause, such as Google implementing sufficient risk mitigations only after the fact. Google should have carried out risk assessments of the tool before rolling it out, as required under the EU's Digital Services Act (DSA). Whether and how it did so remains a mystery. That is unacceptable for such a massive risk to our information environment.
This pattern of sudden, opaque changes and reactive rather than proactive risk mitigation is familiar from new AI products. It is a continual obstacle in our fight for technology that strengthens, rather than weakens, democracy and access to reliable information.
Fortunately, this week has also seen the introduction of new data access rules under the DSA, the EU's flagship regulation of online platforms and search engines. The new rules under Article 40.4 of the DSA allow non-commercial research organizations to request internal company data in order to investigate so-called "systemic risks", including the systemic risk to freedom of information and media pluralism that we believe AI Overviews pose.
We have therefore made, in the first week these rules take effect, one of the first Article 40.4 data access requests, asking how many people still visit websites after searching, versus how many simply stay inside Google Search and read the AI Overviews. The request has been sent to the German Digital Services Coordinator, who must assess whether it meets all the requirements of the new rules.
These new rules have arrived after considerable delay. They were supposed to be in force on 17 February 2024, and we submitted a data access request on that very day, concerning the use of chatbots ahead of the European Parliament elections. But various implementation issues led to a 20-month wait before Article 40.4 properly came into effect. This waiting game is, as we have argued elsewhere, an unfortunate recurring issue with the DSA. While laws must of course be implemented with care, long and unpredictable waiting periods create great uncertainty for organizations and individuals who wish to use the DSA for its intended purpose.
Finally, we need more confidence that findings from Article 40 research can, and will, be used to enforce genuine changes where they are needed to protect against systemic risks from large tech companies. As we have previously stated, we believe enforcement of the DSA has not yet been strong enough. The EU has recently taken some notable steps against Meta and TikTok, but we have been waiting months for sanctions against what we view as clear violations by X. We hope that these new data access rules will be fully supported and will enable research that shows where proper enforcement is needed.