The Automated Hunt for Cybergroomers

Algorithms have been developed to help track down cybergroomers, who stalk minors online. But the reliability of automated systems is controversial – and they can even criminalize children and teens.

Photo by Gilles Lambert on Unsplash

For Jasmin, it’s all about friendship. Almost every day after school, the 12-year-old spends time chatting with friends on German online platforms designed for children – like SchülerVZ, Wer kennt wen or Emo-Treff. But she also chats with strangers. From her profile, men glean information about her, like her age, hobbies, favorite music and favorite color, and they write to her: “Hey cool, I listen to that music too.” She arranged to go out for ice cream with one of them, and he picked her up from school in a car. But instead of going to the ice cream parlor, he drove on into the vineyards. And then, he locked the car door.

“In my head, I wasn’t ready to understand that they wanted sex,” says Jasmin, now 26. During her childhood, 42 men contacted her on the internet, according to a later count she compiled. They were men between the ages of 40 and 60, most of them with children of their own. They would often spend several months building up trust online before committing rape. Some threatened to kill her dog or sister afterward if she spoke about it. “As a child, I thought: Wow, there are people out there who listen to me and understand me,” says Jasmin, who was removed from her family home at the age of 15 because of domestic violence. “Anyone who experiences sexualized violence coupled with significant brutality doesn’t know their limits – I was totally broken.”

Children of all genders are the targets of sexualized abuse, and in around 75 to 90 percent of cases, the perpetrators are men or male adolescents. The perpetrators come from all social backgrounds. Every app or platform used by children and adolescents is also a gateway for child sex offenders – whether forums, social networks such as YouTube, Instagram and TikTok, gaming platforms or classified ad portals. For years now, the number of globally reported cases of cybergrooming – defined as the digital initiation of sexualized contact with minors – has been growing. With increasing digitalization, children are spending more time online and are constantly reachable on their mobile phones. During holiday periods, their online activity increases even more, and children and teens also spent more time in front of screens during the coronavirus lockdowns. A 2022 study by the Media Authority of North Rhine-Westphalia found that one in four minors has been asked by an adult online to meet up; a study conducted just the previous year put the figure at one in five. Experts believe the true number, including unreported cases, is even higher.

Governments, security agencies and platforms around the world are looking for technical solutions for detecting cybergrooming and other forms of online sexualized abuse targeting minors. The planned EU regulation on preventing and combating child sexual abuse (“Chat Control”) could require platforms offering their services in Europe to automatically search their services for known and unknown material that could involve child abuse or cybergrooming and report it to a yet-to-be-established EU reporting center. The encrypted communications of messenger platforms such as WhatsApp are also to be scanned, and a majority of EU member states are pushing for checks of audio messages as well. Draft laws, such as the Online Safety Bill in the UK and the Stop CSAM Act in the U.S., also aim to regulate online communications more strictly.

Experts, meanwhile, are warning of the threat of undermining encryption, encroaching mass surveillance and error-prone algorithms that can lead to false alarms. A study by the European Parliament’s Research Service deemed the plans to be unlawful and ineffective and warned that child sexual abuse material (“CSAM”) and grooming detection technologies would lead to more reporting and less accuracy, with a significant impact on law enforcement workloads. The draft law could “possibly even be counterproductive for child protection,” the justice ministers of Germany, Austria, Switzerland, Luxembourg and Liechtenstein wrote in a letter criticizing the plans. The legal service of the European Council also believes that the European Court of Justice would overturn the planned law. Negotiations, though, are still underway.

Any communication with a child can constitute cybergrooming

Tracking cybergrooming with software may sound like a good idea, says Jasmin, but it is still “nonsense.” As a victim, she knows how difficult and often even impossible it is to recognize cybergrooming. “Some sent dick pics right away to test the response, while others took one to three months to build trust,” she says. Cybergrooming, she adds, begins with normal questions, like, “How are you?” If the child replies that he or she isn’t doing so well, the cybergroomers will have an easy time, she says. Using harmless trick questions, it is easy to find out which school a child attends and what grade they are in – and then wait for them in front of their school: “Boom, they’ve already got you. It happens very quickly,” warns Jasmin.

The profiles of the perpetrators differ just as much as their aims and methods, reports Thomas-Gabriel Rüdiger, head of the Institute for Cybercriminology at the University of Applied Sciences of the Brandenburg Police. “There are quick, aggressive and sex-based initiations; there are acts of deception, where perpetrators consistently pretend to be someone else, such as a young girl; and then there is the exertion of emotional influence.”

The criminologist believes that the aggressive “hypersexualized offenders” could be most readily detected using AI. But in the case of “intimacy perpetrators,” who sometimes spend a significant amount of time establishing influence over their victims, it will be “more difficult to distinguish normal communication from cybergrooming,” Rüdiger warns. “In principle, any communication with a child can legally be considered to be cybergrooming and recorded as such.”

Decoding the Language of Cybergroomers

A number of platforms already deploy various procedures against sexualized abuse. Image recognition techniques attempt to identify sexualized poses and depictions of abuse. To find images that have already been identified as sexualized, software like Microsoft’s PhotoDNA creates a digital fingerprint of an image (called a “hash”), which makes it possible to compare the image with entries in CSAM databases.
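PhotoDNA’s algorithm itself is proprietary, but the matching step follows a simple pattern: fingerprint the image, then look the fingerprint up in a database of known material. Here is a minimal sketch of that lookup, with SHA-256 standing in for the real perceptual hash and a placeholder hash set standing in for a vetted CSAM database:

```python
import hashlib

# Placeholder for a vetted database of fingerprints of known material.
# In practice this would hold PhotoDNA hashes, not toy values.
KNOWN_HASHES = {"0" * 64}

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash such as PhotoDNA. A cryptographic
    # hash like SHA-256 only matches bit-identical files; PhotoDNA-style
    # hashes also survive resizing and re-encoding. The lookup logic,
    # however, is the same.
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_image(image_bytes: bytes) -> bool:
    return fingerprint(image_bytes) in KNOWN_HASHES
```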

Cybergrooming, though, begins even before a photo is sent, and can also take place entirely without images. Simple text-based methods such as word filters are already in use on chat platforms like Knuddels or social networks like Instagram and TikTok. Their algorithms match messages or comments with blocklists full of keywords. However, they only recognize explicit terms. Child sex offenders quickly learn to circumvent such measures by, for example, changing the spelling or replacing letters with numbers.
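A minimal sketch of such a word filter, with a hypothetical blocklist, shows both how the matching works and why it is brittle: simple normalization catches common letter-number swaps, but anything not on the list passes through unnoticed:

```python
import re

# Hypothetical blocklist; real platforms maintain far larger,
# curated lists in many languages.
BLOCKLIST = {"sex", "nude", "naked"}

# Common letter-number swaps used to evade naive filters.
LEET = str.maketrans({"3": "e", "1": "i", "0": "o", "4": "a", "5": "s", "@": "a"})

def flagged_words(message: str) -> set[str]:
    # Lowercase, undo digit substitutions and collapse repeated
    # characters ("seeex" -> "sex") before matching the blocklist.
    text = message.lower().translate(LEET)
    text = re.sub(r"(.)\1+", r"\1", text)
    return {w for w in re.findall(r"[a-z]+", text) if w in BLOCKLIST}

print(flagged_words("wanna talk about s3x"))  # {'sex'} after normalization
print(flagged_words("how was school today"))  # set(): only explicit terms match
```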

At Swansea University in Wales, a team has developed a new machine-learning tool called DRAGON-SPOTTER. The team says that the tool combines AI and linguistics, allowing it to detect hidden meanings and intentions: When and how, exactly, does chat become sexually charged, even if no explicit sexual words appear? When might an offer to help (financial or emotional) be followed up with an attempt at sextortion, a form of blackmail in which the victim is threatened with the online publication of nude photos or videos? And when does seemingly innocent small talk pave the way for an addictive online relationship? The algorithm is designed to help police officers automatically identify groomers’ manipulative language tactics – “from emotionally isolating children to implicitly or explicitly conveying sexual intent to them.”

Among other source materials, 622 English-language chat transcripts from the U.S. initiative Perverted Justice served as the basis for the machine-learning model. Volunteers began participating in chats on MSN, AOL and Yahoo in 2003, using childish-sounding usernames like “sadlilgrrrl.” If a child sex offender took the bait, the volunteers tried to elicit as many details from him as possible. If a meeting could be arranged with the men, they were arrested – sometimes in front of the cameras of the U.S. television show “To Catch a Predator.” The initiative shut down its controversial undercover operations in 2019, but the chats are still available online, and a research paper reveals how the DRAGON-SPOTTER team tried to use them to derive typical language profiles of offenders.

Using procedures of linguistic discourse analysis, researchers identified 70 recurring linguistic patterns in the chats – so-called “three-word collocations,” meaning combinations of three words that frequently appear adjacent to each other. From these, certain links to grooming goals, such as building trust, isolating the child or sexual gratification, can be inferred. The combination of the words “wish,” “could” and “help,” for example, serves to build trust; the words “home,” “alone” and “weekend” grouped together can identify an attempt to establish direct, real-world contact with minors.
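The paper does not publish the extraction pipeline, but the basic counting step behind such collocations can be sketched: collect adjacent three-word combinations across messages and rank them by frequency. The example below is a rough stand-in for that step only; the team’s actual method rests on linguistic discourse analysis, not on raw counts alone:

```python
from collections import Counter

def trigram_counts(messages: list[str]) -> Counter:
    # Count adjacent three-word combinations across all messages.
    counts: Counter = Counter()
    for msg in messages:
        words = msg.lower().split()
        counts.update(zip(words, words[1:], words[2:]))
    return counts

chat = ["are you home alone this weekend", "i wish i could help you out"]
for combo, n in trigram_counts(chat).most_common(3):
    print(combo, n)
```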

Less than half of such combinations contained sexually explicit words. Research leader Nuria Lorenzo-Dus writes in her book “Digital Grooming” (Oxford University Press 2023) that groomers instead often used romantic or friendly terms such as “love” or “like” – words that might also appear in innocuous communication.

The team also examined how compliments, threats or sexualized prompts are used. They explored how groomers assign specific roles to themselves and to the child. Nuria Lorenzo-Dus reports that groomers use language tactically, switching quickly between “nice” and “mean” speech. They might make suggestions before then taking on a lecturing tone, acquiesce and then criticize – with the ultimate goal of completely confusing the child. Groomers emphasize their skills, superiority and sexual knowledge. They portray sex as romantic or as a positive experience for the child. With their fake openness, they build closeness by, for example, sharing real or perceived emotional weaknesses such as fear or loneliness. In doing so, they tempt children to also be open and forthcoming and to show vulnerability. The bond is also strengthened by a demonstrative, pronounced interest in the child.

The developers of SPOTTER also incorporated chat data from real police cases. The tool scans chats for suspicious patterns and calculates an overall suspicion score, which is intended to indicate the likelihood that a conversation involves cybergrooming. A user interface allows police investigators to search suspects’ devices for messages and to download and analyze those messages. To speed up the evaluation of evidence, conversations can be sorted by participants, platforms or suspicion score.
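Swansea University has not disclosed how the score is computed, so the following is a purely hypothetical illustration of the triage idea: weighted tactic flags are combined into a capped score, and conversations are sorted by it so that investigators can look at the highest-scoring ones first.

```python
from dataclasses import dataclass

# Hypothetical per-tactic weights; SPOTTER's real scoring model
# has not been published.
WEIGHTS = {
    "trust_building": 0.2,
    "isolation": 0.4,
    "sexual_intent": 0.6,
    "meeting_request": 0.5,
}

@dataclass
class Conversation:
    participants: tuple[str, str]
    platform: str
    tactics: list[str]  # tactic labels flagged by upstream classifiers

    def suspicion_score(self) -> float:
        # Cap at 1.0 so the score reads roughly like a probability.
        return min(1.0, sum(WEIGHTS.get(t, 0.0) for t in self.tactics))

conversations = [
    Conversation(("userA", "userB"), "chat-app", ["trust_building"]),
    Conversation(("userC", "userD"), "game-chat", ["isolation", "sexual_intent"]),
]

# Triage: evaluate the highest-scoring conversations first.
for c in sorted(conversations, key=Conversation.suspicion_score, reverse=True):
    print(c.platform, c.participants, round(c.suspicion_score(), 2))
```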

It is still unclear how reliable SPOTTER is. The Swansea University team has not disclosed whether the accuracy of the tool has also been tested with chat data that didn’t involve a grooming attempt, or how high the error rate is. British and Australian police have reportedly successfully tested SPOTTER, and AlgorithmWatch sent questions to both, but did not receive a response. Swansea University declined an interview request after receiving a list of questions. The tool, which is currently only available in English, could be used in other countries in the future. For example, the Swansea University team sees potential in India. The online grooming chat data, though, would need to be linguistically analyzed in Indian English and other Indian languages in cooperation with local law enforcement agencies to adapt the tool to the local context.

Tolerating False Alarms

German computer scientist Nicolai Erbs developed the deep-learning AI tool Privalino. “It is very difficult to know when cybergrooming is starting,” he says. Erbs argues that context has to be included in the analysis. The team at the startup Kitext, which he co-founded, analyzed thousands of chats from the Knuddels platform and other data sets to find typical indications of cybergrooming, he says. The system was tested with school classes, and the AI was improved continuously. The fee-based child protection app WhatsSafe, into which Privalino was integrated, was designed to detect cybergrooming in WhatsApp chats – because child sex offenders often switch to messenger apps after targeting children and teenagers in forums, social networks or games. For groomers, messenger apps have the advantage that there is no moderation and currently no automated checking of message content.

The system analyzed chat histories and activated, for example, when contact data was requested or telephone numbers were sent, as well as when a conversation was opened and a personal meeting was immediately suggested. “If someone has been texting for 10 weeks and then asks to meet, for example, the risk of cybergrooming is much lower than if it happens during the initial contact,” Erbs says. It is also possible that it could be a harmless meeting between two children. Erbs is aware that algorithms can make mistakes. “We wanted all problems to be identified – even at the risk of harmless situations being flagged as dubious,” he says. “We have to accept that there will be false alarms.”
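A toy rule in the spirit of what Erbs describes, not Privalino’s actual model, could weight the same request differently depending on how old the contact is. The thresholds and weights below are invented for illustration:

```python
from datetime import timedelta

def meeting_request_risk(conversation_age: timedelta,
                         asks_to_meet: bool,
                         asks_for_contact_data: bool) -> float:
    # Hypothetical contextual rule: the same request weighs more
    # the earlier it appears in the contact.
    risk = 0.0
    if asks_for_contact_data:
        risk += 0.3
    if asks_to_meet:
        # A meeting proposal at first contact counts far more heavily
        # than one after weeks of chatting between the same people.
        risk += 0.7 if conversation_age < timedelta(days=1) else 0.2
    return min(risk, 1.0)

print(meeting_request_risk(timedelta(hours=2), True, True))    # 1.0: alert
print(meeting_request_risk(timedelta(weeks=10), True, False))  # 0.2: low risk
```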

WhatsSafe would send parents a warning by email for every message flagged as suspicious, with a traffic light system indicating the probability of cybergrooming. Today, Erbs believes that the first parents who purchased WhatsSafe were probably paying attention to what was happening on their children’s mobile phones anyway, and had potentially already spoken to them about cybergrooming – such that they likely didn’t need such an eye-catching color-coded system. Others, he says, were overwhelmed despite the warning system, with some parents not contacting the startup until a week after receiving the warnings and asking what they should do now. “Educating children about the problem is incredibly important and the best protection kids can get,” Erbs says. Kitext, the company behind Privalino, folded in 2019 due to a lack of customers.

In retrospect, Erbs thinks other approaches make more sense than text analysis, such as age verification or network analyses that can automatically track down profiles that, for example, write to numerous young girls on social networks within a short period of time and receive few responses. He feels that automated reporting systems don’t belong in the hands of the police, because then they would be required to follow up on any evidence of cybergrooming, regardless of the age of those involved. “You could have two kids at the same school chatting with each other, and you wouldn’t want the police showing up at school in response to such a minor case of cybergrooming,” Erbs says. Parents, though, he says, should be informed of the incident, so that schools can provide assistance in educating children about the issue.
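The kind of network analysis Erbs mentions can be sketched as a simple contact-graph heuristic. Everything here is hypothetical: the message data, the way minors are identified and the thresholds are all assumptions made for illustration.

```python
from collections import defaultdict

def suspicious_senders(messages, minor_ids,
                       min_contacts=20, max_reply_rate=0.1):
    # messages: iterable of (sender, recipient) pairs within some time
    # window; minor_ids: accounts whose holders are known or estimated
    # to be minors. Both inputs are hypothetical.
    contacted = defaultdict(set)  # sender -> minors they wrote to
    replied = defaultdict(set)    # account -> minors who wrote back
    for sender, recipient in messages:
        if recipient in minor_ids:
            contacted[sender].add(recipient)
        if sender in minor_ids:
            replied[recipient].add(sender)
    # Flag accounts that contact many minors but rarely get answers.
    return [
        account for account, minors in contacted.items()
        if len(minors) >= min_contacts
        and len(replied[account] & minors) / len(minors) <= max_reply_rate
    ]
```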


The criminalization of minors

In Germany, the principle of compulsory prosecution applies (Section 176a (1) no. 3; Section 176b (1) nos. 1 and 2 of the German Penal Code). This means the police have no leeway in deciding whether an investigation is appropriate or not: Should there be a suspicion of cybergrooming, they are required to open proceedings, even if the individuals involved do not wish to press charges. “Children and youth can be criminalized by AI,” warns criminologist Thomas-Gabriel Rüdiger. Minors could themselves end up targeted by police over comparatively harmless contact or sexting, the voluntary exchange of erotic communications and media. “If, for example, a 14-year-old is messaging a 13-year-old, it’s not comparable to an adult interacting with a child,” Rüdiger says. But the use of AI probably wouldn’t differentiate in such a situation, because “the police are required to investigate the 14-year-old with the 13-year-old friend just as they would a real sex offender.” Only courts or public prosecutors can make a final decision on charges. Until that point, even children and adolescents who have not engaged in assaultive behavior are considered suspects.

According to an analysis by the German Federal Criminal Police Office (BKA), around 45 percent of the 1,861 cybergrooming suspects in 2022 were children or youth. There are text and image forensic methods that seek to determine the age of participants by evaluating messages or photographs. But here, too, false alarms cannot be ruled out.


Overwhelmed by automated checks

Already, the BKA and police are flooded with tips about sexualized violence targeting children. The BKA receives the bulk of its tips from the U.S. organization National Center for Missing and Exploited Children (NCMEC) – including explicitly sexualized images and videos, recordings and texts with sexualized references, and texts containing sexual innuendo (grooming). Most of the tips originate from platforms like Facebook, Twitch, Omegle and Google, which are legally required to report such material in the U.S. The BKA examines the extent to which the content is criminally relevant under German law and forwards it to the relevant police department. In 2022, the NCMEC forwarded around 136,000 tips pertaining to Germany, of which around 90,000 cases – roughly 66 percent – were deemed relevant under criminal law.

Since the dissemination of depictions of abuse (Section 184b of the German Penal Code) was reclassified as a felony in Germany in 2021, the number of investigations has risen sharply. According to the BKA, though, it wasn’t just child sex offenders who were investigated in many cases, but also care providers, teachers and other supervisors who had only sought to secure evidence. “Since then, it has also been a requirement that cases in which parents found relevant material on their children’s mobile phones and forwarded it to other parents at the school for review or warning, as well as the mass dissemination of viral content due to digital naiveté, be treated as crimes,” says a BKA spokeswoman.

There are now plans to relax certain aspects of the law. Justice ministers of the German states are calling for the offense in Section 184b of the German Criminal Code to be downgraded to a misdemeanor (in legal terms, a less serious offense punishable by a shorter prison sentence or a fine) or for a provision covering less serious cases. The BKA believes that a change in the law would be useful “in that it would allow the prioritization of investigations into producers of depictions of abuse or into child sex offenders.”

If the EU’s plans for Chat Control are implemented, the flood of tips will continue to grow; the BKA expects “an increasing volume of tips.” BKA President Holger Münch wants to expand the BKA into a “hub” for reports from Germany and abroad, including automated processes for recording and sorting evidence, as he emphasized at a press conference for the release of national criminal statistics for 2022 in May 2023. “We not only need to invest in technology – we also need the right staff if we want to be able to keep pace with the increasing numbers,” Münch said. The BKA president added that his agency is seeking to combat low public awareness of investigations on the internet through campaign days and the placement of banners on websites that have been seized.

Online patrols against cybergrooming?

Criminologist Thomas-Gabriel Rüdiger believes it is important that law enforcement efforts be increased in the area of cybergrooming. “In some cases, the perpetrators and abusers have hundreds of victims, but very few of them file charges.” He believes that educating children about cybergrooming, introducing improved guidelines for moderation on previously unregulated gaming platforms and an expanded online police presence would be more effective than automated procedures. Rüdiger notes that in the past, police officers have posed as children in undercover online operations, which have led to convictions. “In my estimation, however, such operations don’t take place often, which explains why there is apparently very little fear of prosecution in this area.”

Rüdiger believes that compulsory prosecution may be contributing to this development and that it is no longer appropriate for online investigations. Even if cybergrooming-related messages in a forum are already years old, the police are still required to take action in response, he says. Such work, he says, hampers the hunt for serious criminals and further burdens already overstretched authorities.

Denmark already has an online police patrol that operates on social networks using its own profiles, as well as in games like Fortnite and Minecraft. In doing so, it aims to “engage in dialog with children and young people, prevent inappropriate behavior and offenses, and intervene when offenses occur,” according to its website. It is not known how the BKA conducts its undercover operations and whether it uses its own online police patrols or its own AI procedures for analyzing cybergrooming. “The BKA does not provide information on tactical investigative measures as a matter of principle,” the agency said when contacted for comment.

Doing the analog homework

The “inconceivably high number of abuse images on the internet” also worries Rainer Rettinger, the managing director of the German Children’s Association, a children’s rights advocacy group. Nevertheless, he considers the EU plans to be a “massive encroachment on the principles of the rule of law.” He argues there is no guarantee that the technology will work flawlessly. “I see this as problematic, and also as a door opener for other scenarios. Child protection laws cannot be allowed to be misused for that,” Rettinger says. “The main thing we need to do is our analog homework.”

According to Rettinger, youth welfare offices need better equipment and twice as many specialists for effective child protection. He says that minors need to be offered protected hearings at youth welfare offices and family courts, with a specially trained procedural counsel. “Often, children’s statements aren’t believed, and they’re not even listened to in family courts,” the child protection expert warns. He says it is also important to clearly communicate children’s rights and child protection measures at all educational and care institutions, and teaching and care staff must be sensitized in the same way as minors.

Jasmin provides workshops at schools to educate others about the dangers of cybergrooming. She also sees a need for parents and schools to catch up. “It’s a huge problem that parents blame everything on teachers – while the teachers see it as the parents’ job.” She says they need to be able to address the dangers of cybergrooming and recognize signs of abuse even without words: “The offenders put you under pressure, they forbid you from talking and the threat situation is acute.” Jasmin knows this from her own experience. On her Instagram page @Das_Schweigen_brechen (breaking the silence), she shares her experiences and serves as a point of contact for young people who are being targeted, for teachers and for interested police departments.

Any child can be the target of cybergrooming, Jasmin warns – even if parents talk to the child and monitor their mobile phones or if algorithms scan their chats. “There is no such thing as 100 percent safety,” she says. “People shouldn’t rely completely on any technological solution.”


Sonja Peteranderl

Former Fellow Algorithmic Accountability Reporting

Sonja is a journalist and the founder of BuzzingCities Lab, a think tank focusing on urban violence and technology. She was an editor for DER SPIEGEL and WIRED Germany and a freelance foreign correspondent. During her fellowship, she investigated algorithmic systems in policing/security, the impact of AI on the visibility of marginalized communities, and the role of automated systems in the context of gender-based violence.