Grok AI Update Sparks UK Outcry Over Digital 'Undressing' of Women and Children

The UK's communications watchdog has launched an urgent probe into Elon Musk's X platform after its artificial intelligence assistant, Grok, was found to be widely used to create sexually suggestive and degrading images of women and children.

Regulator Steps In As Harmful Trend Goes Viral

Ofcom confirmed on Monday that it had made "urgent contact with X and xAI" to understand the steps taken to protect users in the UK. The intervention follows days of mounting concern over misuse of the chatbot after a December update made it simpler for users to request that photographs be digitally altered to remove clothing.

The watchdog stated that it would assess whether a formal investigation is required based on the companies' responses. The development coincided with the European Commission announcing it was looking "very seriously" into complaints that Grok was being used to generate and spread sexualised imagery of children.

Research Reveals Scale of Abuse

Analysis by the Paris-based non-profit AI Forensics examined 50,000 mentions of @Grok on X during the week from 25 December to 1 January. The research found that more than half of the images generated depicted people in minimal attire such as underwear or bikinis, the majority of them appearing to be women under 30.

Disturbingly, the researchers identified that roughly 2% of the images seemed to show individuals aged 18 or under, including children under five. The study also noted a high prevalence of specific prompt terms such as "her", "remove", "bikini", and "clothing".

On Sunday and Monday alone, users continued to generate suggestive pictures of minors, with images created of children as young as 10. High-profile cases included Ashley St Clair, mother to one of Musk's children, who complained the AI generated a picture of her aged 14 in a bikini, and a manipulated image of 14-year-old actor Nell Fisher from Stranger Things.

Legal Gaps and Political Criticism

Politicians and women's rights campaigners have accused the UK government of "dragging its heels" by failing to enact legislation passed six months ago. The new law makes the creation of intimate deepfake images without consent illegal, but the relevant provisions have not yet been implemented, rendering it unenforceable.

Conservative peer Charlotte Owen, who championed the legislation, criticised the delay: "The government has repeatedly dragged its heels and refused to give a timeline... We cannot afford any more delays. Survivors of this abuse deserve better."

While creating such images of children is already illegal, the position for adults is less clear. Sharing non-consensual deepfakes is an offence, but creating them currently falls into a legal grey area pending the new law's commencement.

Labour MP Jess Asato described the act as a form of sexual assault, stating: "It is taking an image of women without their consent and stripping it to degrade her – there is no other reason to do it except to humiliate."

Platform Response and Ongoing Concerns

Elon Musk initially responded to the trend with amusement, posting a laughing emoji. Following a global outcry, he later warned that anyone using Grok to make illegal content would face consequences. An X spokesperson said the platform takes action against illegal content, including permanently suspending accounts and working with law enforcement.

However, a statement from Grok claiming it had "identified lapses in safeguards" and was "urgently fixing them" was later revealed to have been generated by AI itself, casting doubt on the authenticity of the commitment.

Campaigners like Conservative peer Gabby Bertin argue the government must act swiftly, as legislation is "always playing catch up" with rapidly evolving technology. The mainstreaming of this capability via a popular platform like X has significantly lowered the barrier to creating and disseminating harmful, non-consensual imagery.