
Face recognition as a Trojan horse: it looks like a security vehicle but undermines democracy
Face recognition has become a pet project of security hardliners. However, the technology does little to contribute to more security; instead, it jeopardizes and violates fundamental rights.

What do “face recognition” and “remote biometric identification” mean?
The term “face recognition” often refers to “remote biometric identification”: the analysis of physical characteristics or movement patterns by AI tools in order to detect specific individuals in a crowd. This might concern facial features, the voice, the iris, or an individual's gait, for example when cameras capture such biometric characteristics and match them against data sets of wanted persons. In most cases, the people who are filmed are not aware that their biometric data is being analyzed.
The processing of biometric data takes place either in “real time” (when data collection and analysis coincide) or “retrospectively,” in which case the collected data is analyzed at some later point. But what exactly does “later” mean? After a minute, or after a day? No one can explain the difference from real-time analysis, as the term has not yet been defined.
It is easier to say what “remote” in “remote identification” refers to. The data matching takes place away from the place where the data was collected, and the people recorded do not actively take part in the identification process. Such active participation would occur, for example, if people put their fingers on a scanner to have their prints taken. Essentially, an identification is considered “remote” if there is a physical distance between the data collection and the data processing, and if the people whose biometric data is processed are not actively involved. If cameras record the physical characteristics of multiple people in an airport, for example, this is a case of remote identification.
What are the arguments against face recognition?
Creating a fear of exercising fundamental rights
If this technology becomes widely used, it could have a chilling effect on the exercise of fundamental rights, which are particularly at risk if data is processed retrospectively. Governments and authorities such as the police can use sensitive personal data to track people down, monitor their activities, and find out who they have met with, over weeks, months, or years. This might, for example, discourage journalists' sources from providing them with important information, as they can no longer be sure of remaining anonymous.
Recent German government coalitions have been working to legitimize the matching of all faces on the internet by means of face recognition. Such a step could deter people from participating in demonstrations in the future, as photos of them could be published online and might be used to justify a house search. In any case, subjecting literally everyone to a manhunt is completely disproportionate.

Biometric recognition technologies promise to provide more security, but they are unreliable and discriminatory. The use of AI for remote biometric identification of individuals is always disproportionate, as it jeopardizes fundamental rights. In our explainer, you can find a more detailed explanation of the terms surrounding this technology.
Illegal databases to end the presumption of innocence
The biometric matching of face images with publicly available data from the internet requires the creation of a database containing all such images: family photos on Facebook, selfies on Instagram, or portraits on employers' websites. The police have relied on face image databases for a while. A newer development is AI-supported face recognition based on automated “scraping,” the collecting and storing of images from the internet. This has become a business model: companies like Palantir, Clearview AI, and PimEyes make money by storing publicly available images from the internet in searchable databases. Data protection advocates have criticized this practice for years, if only because the people who appear in the images never consented to their data being processed for such a purpose.
Ultimately, we would all end up in a police database without any suspicion being necessary. Security agencies must, of course, be provided with effective legal authority to investigate and prevent crime. This, however, must not lead to the abandonment of the presumption of innocence, one of the fundamental principles of criminal prosecution under the rule of law. Face image databases undermine the very core of the presumption of innocence.
What’s more, the EU’s AI Act prohibits the creation of such AI-supported face image databases. In Hungary, a recent case offered a glimpse of how biometric surveillance systems can be misused for oppression: Prime Minister Orbán planned to use face recognition to identify participants in Budapest’s Pride parade and prosecute them, which was only prevented by protests from across Europe. People who are critical of restrictive governments are put at particular risk by face recognition technology.
The WeAct petition launched by AlgorithmWatch against mass biometric surveillance has collected more than 50,000 signatures. This is a clear sign of opposition to facial recognition and all other methods that use biometric data for surveillance purposes.

False positives: Deceptive correlations and discrimination
The German Ministry of the Interior wants the Federal Office for Migration and Refugees to match photos of asylum seekers against face images on the internet. Here, the risk posed by face recognition software is particularly high and evident: asylum applications from people in need could be rejected on the basis of misleading images. Applicants might simply have been in the wrong neighborhood at the wrong time and been photographed near suspicious or wanted persons, even though there was no contact between them at all.
This example shows how problematic it can be to speak of objective “data analysis” in such contexts. The term suggests a precise and comprehensible procedure, but the opposite is true. Independent data points could be combined in a way that creates a spurious correlation, and it would be very difficult to refute the resulting unfounded suspicion.
Biometric recognition systems do not work nearly as well as their providers want us to believe. Time and again, they flag people as dangerous who clearly are not. In a test at Berlin's Südkreuz train station, approximately one in every 200 people was falsely classified as a wanted person, which corresponds to around 600 false reports per day. Individuals who come under suspicion in this unjustified way are subjected to unpleasant checks, and the false alarms would permanently create considerable extra work for the police. The systems are particularly error-prone when it comes to women and people of color, which means that they discriminate against groups that are already socially disadvantaged.
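The Südkreuz figures can be checked with simple arithmetic. The sketch below assumes a daily passenger count of about 120,000, a number inferred from the reported rate and alarm count rather than stated in the text:

```python
# Back-of-the-envelope check of the Südkreuz pilot figures.
# ASSUMPTION: the daily passenger count (120,000) is inferred from the
# reported numbers (600 alarms at a 1-in-200 error rate); it is not
# stated in the article itself.

def expected_false_alarms(people_scanned: int, false_positive_rate: float) -> float:
    """Expected number of people falsely flagged as wanted per day."""
    return people_scanned * false_positive_rate

fpr = 1 / 200             # roughly one in every 200 people misclassified
people_per_day = 120_000  # assumed daily traffic past the cameras

print(expected_false_alarms(people_per_day, fpr))  # prints 600.0
```

The point of the calculation is that even a seemingly small error rate, applied to the enormous number of people passing through a public space, produces hundreds of wrongly suspected individuals every single day.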
Is AI surveillance coming?
Cameras constantly scanning people in public spaces and analyzing their individual characteristics would lead to a life like the one known from oppressive states, with people constantly feeling observed and restricted in their movement. In a healthy, pluralistic democracy, people must be free to openly express their opinions, develop their personalities, and engage in political debates without fear of negative consequences. This is guaranteed by fundamental rights such as the right to demonstrate and the right to freedom of expression. The rights to data protection and informational self-determination also guarantee that people's personal data does not become a self-service store for others.
Technical systems for surveillance and biometric identification are designed to process data taken from an unlimited number of people in publicly accessible spaces, which amounts to mass surveillance. Public places include parks, streets, shopping centers, and sports facilities, but also platforms such as Instagram.
In its history, Germany has had more than enough experience with totalitarian regimes. It therefore carries a special responsibility not to build infrastructure that can be used to control and oppress an entire population. That is why AlgorithmWatch is calling for a blanket ban on face recognition in public spaces.
Do you believe that fundamental rights must be permanently protected and strengthened? Then become a supporting member of AlgorithmWatch!
With your monthly donation, you support thorough research and effective campaigns. Help us put pressure on decision-makers by confronting them with our expertise!

More information on your supporting membership here.

