Sexualized images on X: What we are doing to stop them and what we expect from the EU

X’s Grok chatbot is the focus of yet another scandal after generating pictures of real people, including children, in bikinis without their consent. But the problem of non-consensual AI-generated sexual images on X goes much further than Grok, and X has blocked the research needed to address it. The European Commission needs to step up its game to protect people from this kind of violence.

The collage shows four archival images of women, some of them nude, alongside a portrait of a woman with yellow shapes and bounding boxes on her face.
Oliver Marsh
Head of Tech Research

“Have you ever wanted to see your neighbor or a girl you know without clothes?”

This is the text of a post, translated from Russian, from a publicly available account on X dedicated to promoting non-consensual sexualization tools (NSTs), often called “nudify apps.” Networks of such accounts have been reported by outlets including The Guardian, Bellingcat, and Indicator, with extensive criticism of X for hosting them. As part of our research into NSTs on large online platforms, we have seen X accounts that offer nudification services, accounts that compile and rank NSTs, accounts that run competitions to win credits for NSTs, and other uses of X to openly spread these tools. Many have hundreds of followers and names that explicitly reference terms like “Nudify” or “Clothes Off.” This should make them very easy to detect and remove, if X wanted to. And yet, such posts and accounts are still on X. We reported the post quoted above and were told it does not violate X’s policies. The issue of non-consensual sexualization on X is far from limited to the Grok chatbot.
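How low is the technical bar here? As a minimal sketch, a few lines of keyword matching over public account names and bios would already surface the most blatant of these accounts. Everything below (the keyword list, the sample records, the function) is an illustrative assumption for this sketch, not X’s tooling and not our own detection pipeline:

```python
# Minimal sketch: flag accounts whose public name or bio openly references
# NST terms. Keywords and sample records are illustrative assumptions,
# not real accounts or a production rule set.
NST_KEYWORDS = ("nudify", "clothes off", "undress ai")

def looks_like_nst_account(name: str, bio: str = "") -> bool:
    """Return True if the account's public name or bio contains an NST term."""
    text = f"{name} {bio}".casefold()
    return any(keyword in text for keyword in NST_KEYWORDS)

accounts = [
    {"name": "Nudify Promo", "bio": "credits and rankings for the best apps"},
    {"name": "Street Photography Daily", "bio": "urban scenes, no filters"},
]
flagged = [a["name"] for a in accounts if looks_like_nst_account(a["name"], a["bio"])]
print(flagged)  # ['Nudify Promo']
```

Real content moderation is of course harder than string matching, but accounts that literally put “Nudify” in their display name do not require sophisticated detection.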

This is why it is crucial that watchdog organizations like AlgorithmWatch can find and flag such content. But X has actively blocked us from doing such work.

The rollout in recent years of general-purpose generative AI tools from companies like OpenAI has made it relatively simple to develop NSTs. These services can easily be found in various dark corners of the internet, including on Telegram, Discord, and similar platforms. Tips for getting general-purpose chatbots to produce non-consensual images circulate on Reddit. But their circulation on very large platforms such as X, Facebook, and Instagram, including through monetized advertising, helps spread them to far wider audiences.

At AlgorithmWatch we have been building a system to help detect NSTs on large platforms, including by crowdsourcing observations of such tools. For this we have been using opportunities presented by the EU’s Digital Services Act (DSA). This regulation requires large online platforms to assess and mitigate systemic risks, ranging from threats to fundamental rights to gender-based violence, and to provide data to researchers who study those risks. Sexualization without consent should be a clear case to be addressed under these rules.
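Crowdsourced detection depends on collecting sightings in a comparable form and collapsing repeated reports of the same thing. The following is a minimal sketch of such a record format; the field names and the de-duplication rule are our own assumptions for illustration, not the actual schema of our system:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative sketch of a crowdsourced NST sighting; field names are
# assumptions for this example, not a real reporting schema.
@dataclass(frozen=True)
class Observation:
    platform: str      # e.g. "X", "Instagram", "Google Play"
    url: str           # public link to the post, ad, or app listing
    note: str          # what the observer saw
    seen_at: datetime  # when it was spotted

def dedupe(observations: list[Observation]) -> list[Observation]:
    """Keep the earliest report per URL so repeat sightings are counted once."""
    earliest: dict[str, Observation] = {}
    for obs in sorted(observations, key=lambda o: o.seen_at):
        earliest.setdefault(obs.url, obs)
    return list(earliest.values())
```

Keying on the URL means ten reports of the same post count as one sighting, which keeps the evidence base honest.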

To build our detection system, we planned to use data from Meta’s platforms, Apple’s and Google’s app stores, and X. All of these have previously been found to host content promoting NSTs. And all are covered by DSA rules, which say they must, on request, provide data to public interest researchers who meet a series of conditions (which we do). Our experience of invoking these rules was mixed. X, unsurprisingly, was by far the worst.

In June 2025 we requested data under Article 40.12 of the DSA. X refused, saying, “Your application fails to demonstrate that your proposed use of X data is related to the specified systemic risks in the EU as described by Art. 34 of the Digital Services Act.” They have used this exact text to reject many other requests (as found by the DSA Data Access Collaboratory), so it seems to be their default refusal. We complained to X about this via their online form in July and later followed up with personal emails to relevant staff members, but received no answer.

By contrast, Apple’s and Google’s app stores made access to data relatively straightforward, and our tests so far suggest that blatant NSTs are hard to find on them. Accessing Meta’s data via their official tool requires agreeing to a series of burdensome rules, some of which actively make it difficult to report violative content. Meta does make some basic efforts to address clear problems: searches for terms like “nudify” are blocked, for example, and the company is suing one provider for advertising NSTs on its platforms. Even so, research from Indicator shows that the problem is still rife.

At the end of 2025, the European Commission announced a 120 million euro fine against X under the DSA for offenses including “failure to provide researchers access to public data.” This is a positive step, but also a somewhat hesitant one after such flagrant and long-running violations. After the latest scandal, X has blamed users for their prompting behavior, not its own failure of safeguards. It will also probably tweak the Grok chatbot to keep the scandal from spreading further, as it did the last time this happened. None of this addresses the real issue: non-consensual nudity is rampant on X, and so far the company has done almost nothing about it. It is our role as a civil society watchdog to detect and reveal such transgressions. But the European Commission needs to step up its game to protect people from this kind of violence.
