UK Moves to Criminalise AI 'Nudification' as Grok Feature Sparks Outrage

In a stark response to a wave of AI-facilitated sexual harassment, the UK government has announced it will criminalise the creation of nonconsensual intimate imagery. This move comes after months of documented abuse centred on the image-generation feature of X's AI chatbot, Grok, which users have exploited to digitally strip clothes from images of women and children.

A Wave of AI-Facilitated Abuse

The scale of the problem was highlighted by AI governance expert Nana Nwachukwu, who documented 565 instances of users asking Grok to create nonconsensual intimate imagery between June 2025 and January 2026. Shockingly, 389 of these requests were made in a single day. The process is alarmingly simple: a user posts an ordinary text reply tagging Grok with a request, and the AI generates the abusive image and publishes it to a vast audience.

Following a significant public backlash, X announced last Friday that it would restrict Grok's image generation to paying subscribers. Reports suggest the bot now refuses prompts to generate images of women in bikinis, though it may still comply with similar requests concerning men.


Government Action Deemed 'Not Enough'

Technology Secretary Liz Kendall swiftly condemned X's response as insufficient. "This does not go anywhere near far enough," she stated, announcing that creating nonconsensual intimate images would become a criminal offence this week, with the supply of dedicated nudification apps also being outlawed.

Critics argue that X's decision to place the feature behind a paywall merely allows the platform to profit more directly from the harassment. Moreover, the fact that specific prompts were blocked only after public outcry raises fundamental questions about why such capabilities were permitted in the first place.

Shadow Technology Secretary Julia Lopez suggested the government was overreacting, calling it a "modern-day iteration of an old problem" comparable to crude drawings or Photoshop. However, experts counter that the scale, accessibility, and speed of AI-generated abuse are fundamentally different, requiring a new regulatory approach.

The Limits of Reactive Regulation

While criminalising users and tool suppliers is a step forward, Nwachukwu argues it misses the core issue. Grok and similar tools are not dedicated nudification apps; they are general-purpose AI systems with weak safeguards. Kendall's proposed law does not require platforms to implement proactive detection; instead, it waits for harm to occur before imposing punishment.

This reactive model has clear drawbacks. Harmful images generated before the backlash persist, potentially saved and shared across other platforms. For victims, regulation after the fact offers little solace. "For harm that is structurally amplified in this manner, the approach must be preventive, not reactionary," Nwachukwu asserts.

A Transnational Challenge

A more fundamental obstacle is the lack of international alignment on AI safety. While the UK pushes for regulation, the US under the Trump administration is pursuing a "minimally burdensome" policy framework to enhance its global AI dominance. This provides little incentive for American companies like X, OpenAI, or Anthropic to prioritise safety.

"Kendall can criminalise users in the UK, she can threaten to ban X entirely," Nwachukwu notes. "But she cannot stop Grok from being programmed in San Francisco." This highlights the critical need for cross-border collaboration, as national laws struggle to regulate a transnational technology.

As debates continue, victims and women online are left questioning their safety and participation on global platforms. The central demand from experts is a paradigm shift in regulation: from requiring companies to "remove harm when you find it" to legally obligating them to "prove that your system prevents harm." This would involve mandatory input filtering, independent audits, and licensing conditions that embed safety into the technical design, potentially stopping abuse before it ever materialises.
