Ten years on, search auto-complete still suggests slander and disinformation

After a decade and a string of legal actions, an AlgorithmWatch experiment shows that search engines still suggest slanderous, false and disparaging statements.


3 June 2020


Nicolas Kayser-Bril
Reporter

Ten years and six months ago, a Paris court found Google guilty of slander after its search bar suggested adding “fraud” to the name of a company as users typed it. (In an earlier case, in 2007, a Belgian court had ruled in favor of Google.)

Since then, several individuals, firms and anti-discrimination organizations have sued Google and other search engines over slanderous suggestions. In 2012, after a campaign by anti-racist organizations, Google agreed to remove “… is a jew” from the suggestions it made. In 2013, the German Federal Court of Justice ruled that some of Google’s suggestions could be libelous, after the search engine suggested that the country’s first lady could have been a prostitute. In 2016, Google Poland agreed to remove slanderous suggestions appended to an individual’s name, just before court proceedings began.

In all the cases AlgorithmWatch reviewed, Google argued that suggestions were generated automatically from user behavior and that the firm had no control over them. In most cases, judges noted that Google could very well remove pornographic suggestions at will, and found it guilty of slander.

Google, alone among search engines, seems to block automated suggestions for some groups of people targeted by institutional discrimination. In 2017, in an attempt to show its willingness to address the issue, the firm even added a form that lets users report inappropriate suggestions for search queries. (The feature is not available for Google Translate, which also provides automated suggestions.)

A slanderous suggestion for each query

A review of 1,117 queries made on the four search engines with more than 1% market share in Europe shows that automated suggestions are still as inappropriate as ever. AlgorithmWatch submitted 33 queries in three languages about seven individuals and five groups (for instance “the French are...” or “Angela Merkel ist...”) and stored the suggestions the search engines returned. The queries were made in a “clean” browser over Tor, a system that cloaks the origin of a request, so that results would not be influenced by personalization or localization; a minimal version of such a collection step is sketched below.
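For readers curious about the mechanics, here is a minimal sketch of how such a collection step can work. It is an illustration, not the study’s published source code (which is linked below): it assumes a local Tor client exposing a SOCKS proxy on port 9050 and uses Google’s publicly reachable suggest endpoint; other engines expose similar endpoints.

    # Illustrative sketch, not the study's published code. Assumes a local Tor
    # client exposing a SOCKS5 proxy on 127.0.0.1:9050 (requests needs the
    # PySocks package for socks5h:// proxy support).
    import json
    import requests

    TOR_PROXY = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    def fetch_suggestions(query, lang="en"):
        """Return Google's autocomplete suggestions for `query`, routed through Tor."""
        resp = requests.get(
            "https://suggestqueries.google.com/complete/search",
            params={"client": "firefox", "hl": lang, "q": query},
            proxies=TOR_PROXY,
            timeout=30,
        )
        resp.raise_for_status()
        # The endpoint answers with JSON of the form [query, [suggestion, ...]]
        return json.loads(resp.text)[1]

    if __name__ == "__main__":
        for suggestion in fetch_suggestions("angela merkel ist", lang="de"):
            print(suggestion)

Repeating this for each query, language and engine, and storing the returned lists, yields a dataset like the one described next.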

The search engines returned 5,365 suggestions, of which 1,789 were unique. These were then classified as either slanderous or demeaning (e.g. claiming that someone is ugly), or as containing false statements (e.g. claiming that someone is dead). The data is freely available, as is the source code of the experiment.

Taken together, the search engines returned almost one slanderous suggestion for each query, and one false statement for every two queries.


By a wide margin, Yahoo produced the most inappropriate suggestions. The relatively low number of slanderous or false suggestions returned by Bing could be due to the search engine’s very high latency, which results in some suggestions never showing up in the user’s browser. Considering only the 762 queries for which the search engines did return a suggestion, Bing provided more slanderous suggestions than Google.

A spokesperson for Microsoft, who reviewed the data, thanked AlgorithmWatch for “helping [them] to improve Bing”. “We strive to keep potentially offensive suggestions out of our results and have taken actions on those flagged, which were in violation of our policies. We’re also following up to make sure those actions are having the desired effect,” they added.

Under the influence

Although Google claims that its suggestions are not suggestions but “predictions”, search engine optimization professionals agree that users are influenced by them. They recommend that websites create a page for each suggestion Google provides, in order to be visible on the first results page, whichever suggestion users choose.

Some companies even hire poorly paid workers on Amazon’s Mechanical Turk to manually search for specific terms, in an effort to drive down any unappetizing suggestions that Google may serve.

Absent an audit of the autocomplete features of search and translation engines, it is impossible to say exactly how they work and how they impact users. But as Safiya Umoja Noble wrote in her 2018 book, “Algorithms of Oppression,” search results and suggestions shape social values as much as they reflect them.

Google, Verizon (which owns Yahoo) and Yandex did not answer requests for comment.
