Unchecked use of computer vision by police carries high risks of discrimination

At least 11 local police forces in Europe use computer vision to automatically analyze images from surveillance cameras. The risks of discrimination run high but authorities ignore them.

Nicolas Kayser-Bril
Reporter

Pedestrians and motorists in some streets of Warsaw, Mannheim, Toulouse or Kortrijk are constantly monitored for abnormal behavior. Police in these cities, and many others, have connected the video feeds of surveillance cameras to automated systems that claim to detect suspicious behavior such as driving in bus lanes, theft, assault or the formation of aggressive groups.

All the automated surveillance techniques in use in the cities listed above rely on Machine Learning. This approach requires software developers to feed computer programs large numbers of scenes depicting normal situations, and others depicting the situations considered abnormal. The programs are then tasked with finding patterns that are specific to each type of situation.
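
As an illustration only, the sketch below shows the general shape of such a training pipeline in Python with scikit-learn. The feature vectors, labels and classifier are synthetic stand-ins invented for this example; none of the vendors discussed here have disclosed how their systems are actually built.

```python
# Illustrative sketch only: synthetic feature vectors stand in for annotated
# surveillance scenes, and scikit-learn's RandomForestClassifier stands in
# for whatever model a vendor actually uses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each scene has been reduced to a numeric feature vector
# (speed, direction, group size, ...) and hand-labelled by annotators:
# 0 = "normal", 1 = "abnormal" (theft, assault, driving in a bus lane, ...).
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)  # made-up labelling rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model is only asked to find patterns that separate the two classes in
# the training data; it has no notion of what "theft" or "assault" means.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```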

Spurious correlations

Machine Learning has many applications that are now routinely used, such as reverse image search or automated translation. But the drawbacks of this technique are well known. The software does not understand a situation in the human sense; it only finds patterns in the data it has been given. This is why, after decades of controversy, Google Translate still renders the gender-neutral “they are doctors” in German as “sie sind Ärzte” (masculine) and “they are nurses” as “sie sind Krankenschwestern” (feminine). Google Translate was not programmed to be sexist. The corpus of texts it was trained on simply happened to contain more instances of male doctors and female nurses.

What is true of automated translation is true of automated image recognition, known as computer vision. On 7 April, AlgorithmWatch revealed that Google Vision, an image-labeling service, classified a thermometer as a “tool” when it was held in a light-skinned hand, and as a “gun” when held in a dark-skinned one. (Google has since changed its system.)

On 3 April, Google Vision Cloud produced starkly different labels after an overlay was added.
Results provided by Google Vision Cloud before 6 April.
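
For readers who want to reproduce this kind of test, the snippet below sketches how an image can be sent to Google Cloud Vision's label-detection endpoint with the official Python client. The filename is a placeholder, running it requires a Google Cloud account and credentials, and the labels returned will depend on Google's current models.

```python
# Hypothetical reproduction of the test described above, using the official
# google-cloud-vision Python client. "hand_with_thermometer.jpg" is a
# placeholder filename; valid Google Cloud credentials are required.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("hand_with_thermometer.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # e.g. "Hand", "Thermometer" -- or, as in the article, "Gun".
    print(f"{label.description}: {label.score:.2f}")
```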

Spurious correlations can have several causes, according to Agathe Balayn, a PhD candidate at Delft University of Technology who studies bias in automated systems, but most of them likely stem from the training data sets. Computer vision systems rely on the manual annotation of millions of images. This work is often done by workers paid a few cents per task, who have strong incentives to be fast and to conform to their clients’ expectations, Ms Balayn wrote to AlgorithmWatch. Diversity and subtlety in the training data set suffer as a result.

Misconceptions

AlgorithmWatch asked several vendors of computer vision solutions to police forces what training data they used, and how they ensured that their programs were not discriminatory.

A spokesperson for BriefCam, which is used by police forces from Warsaw to Roubaix, stated in an email that, because the software did not use skin tone as a variable, it could not discriminate. This is a commonly held misconception. Machine Learning software is designed to find patterns that were not specified by its programmers in order to achieve its results. This is why Google Translate produces sexist outcomes and Google Vision produces racist outcomes, even though neither was explicitly programmed to take gender or skin tone into account.
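
A small synthetic experiment can make the point concrete. In the hypothetical sketch below, a classifier is never shown the sensitive attribute, yet it reconstructs it from a correlated feature and flags one group far more often, because the made-up training labels were biased to begin with.

```python
# Hypothetical, synthetic demonstration: the sensitive attribute is never an
# input to the model, yet the model still treats the two groups differently,
# because a correlated "proxy" feature leaks it and the labels are biased.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

group = rng.integers(0, 2, size=n)             # e.g. skin tone (0 or 1)
proxy = group + rng.normal(scale=0.3, size=n)  # correlated feature the model does see
noise = rng.normal(size=n)

# Biased labels: annotators flag behaviour from group 1 as "suspicious" far more often.
y = (1.5 * group + noise > 1.0).astype(int)

# The sensitive attribute itself is NOT among the model's inputs...
X = np.column_stack([proxy, rng.normal(size=n)])
model = LogisticRegression().fit(X, y)

# ...yet the predicted "suspicion" rate still differs sharply by group.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: flagged rate = {pred[group == g].mean():.2f}")
```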

BriefCam’s spokesperson added that they used “training datasets consisting of multi-gender, multi-age and multi-race samples without minority bias,” but declined to provide any evidence or details.

The police of Etterbeek, in Brussels, uses computer vision to automatically spot illegal trash disposal. A spokesperson for the city wrote that the system did not take skin tone or any other individual trait into account, but failed to provide any information about the training data set their software was built on.

A spokesperson for Fraunhofer IOSB, which powers the automated surveillance of Mannheim, Germany, claimed that their software could not be discriminatory because it relied on three-dimensional modelling of body shapes. It analyzed movements, not images, and therefore did not use skin tone, he added. Details on the training data set and its diversity were not provided.

Avigilon declined to comment. One Télécom, Two-I and Snef did not reply to numerous emails.

Invisible issue

Automated surveillance is hard to detect. Police forces have no obligation to disclose that they use it, and calls for tenders are rarely published. In Poland, for instance, AlgorithmWatch was told that any information on the issue was “confidential”. The details of the police's automated surveillance operation were only available in an article in their internal publication, Police Magazine, which is nevertheless available online.

This invisibility makes it hard for civil society organizations to weigh in. AlgorithmWatch spoke to several anti-discrimination organizations at the local and national level. While their spokespersons acknowledged the importance of the issue, they said they could not address it, citing a lack of awareness among the population and a lack of monitoring tools.

Meanwhile, automated surveillance has the potential to dramatically increase discriminatory policing practices.

Unaudited

How much automated surveillance impacts discrimination in policing is not known. None of the vendors or cities AlgorithmWatch contacted conducted audits to ensure that the output of their systems was the same for all citizens.
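
Such an audit does not have to be elaborate. As a purely hypothetical sketch, the snippet below compares how often a system flags people from different demographic groups; the decision log and group labels are made up for illustration, not figures from any real deployment.

```python
# Hypothetical sketch of a basic output audit: compare how often a deployed
# system flags people from different groups. The decision log and group
# labels below are made up for illustration.
import numpy as np

def audit(flags: np.ndarray, groups: np.ndarray):
    """Return the share of flagged cases per group and the min/max ratio."""
    rates = {str(g): float(flags[groups == g].mean()) for g in np.unique(groups)}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Made-up decision log: 1 = flagged as suspicious, 0 = not flagged.
flags = np.array([0, 1, 0, 0, 0, 1, 1, 0, 1, 1])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rates, ratio = audit(flags, groups)
print(rates)                         # e.g. {'a': 0.2, 'b': 0.8}
print(f"parity ratio: {ratio:.2f}")  # far below 1.0 signals a disparity worth investigating
```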

Nicole Romain, spokesperson for the Agency for Fundamental Rights of the European Union, wrote that any institution deploying such technologies should conduct a “comprehensive fundamental rights impact assessment to identify potential biases”.

When it came to computer vision in policing, she was not aware that any such assessment had ever been made.
