Elon Musk's X Blocks Free Grok AI After Child Abuse Image Scandal

Elon Musk's social media platform X has blocked users who do not pay for its premium subscription from accessing the image-generation feature of its Grok artificial intelligence tool. The move, implemented on Friday, comes amid a growing scandal in which the AI was exploited to create non-consensual, sexualised deepfake images of women and children, material that child protection groups classify as child sexual abuse material (CSAM).

A Tool for Digital Abuse Snowballs Over Christmas

The controversy erupted when it was revealed that X users had been systematically using Grok to digitally undress photos of real women and minors. Research indicated the trend gained horrific momentum over the Christmas period. Content analysis firm Copyleaks reported that by 31 December, users were generating roughly one non-consensual sexualised image per minute on the platform.

Nana Nwachukwu, a PhD researcher at Trinity College Dublin, found that nearly three-quarters of analysed posts contained requests for these illicit images of real individuals. The UK's Internet Watch Foundation confirmed its analysts had discovered criminal imagery of children aged 11 to 13 that appeared to have been created using Grok. The abuse often took a communal, grotesque form, with users coaching each other on effective prompts and sharing the results publicly in replies to the victims' original, innocent posts.

Political Outcry and Musk's Defensive Response

The scandal prompted fierce condemnation from UK politicians. Technology Secretary Liz Kendall stated the government "cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls." Deputy Prime Minister David Lammy revealed that even US Vice-President JD Vance agreed the situation was "entirely unacceptable." Ministers suggested a potential ban on X was being considered.

Elon Musk initially responded with a glib dismissal, his company xAI replying "Legacy Media Lies" to press inquiries. He later claimed that anyone using Grok to make illegal content would face consequences. His primary action, however, was to restrict the image-generation tool to paying subscribers only, a move a Downing Street spokesperson criticised as simply "turn[ing] an AI feature that allows the creation of unlawful images into a premium service." Users reported that the separate Grok app continued to allow the generation of such imagery.

The Regulatory Challenge: Can Law Keep Pace with AI?

The crisis highlights the immense difficulty regulators face in controlling fast-evolving AI technology. Under the UK's Online Safety Act, regulator Ofcom has the power to block websites or impose fines of up to 10% of global turnover. Prime Minister Keir Starmer has pledged Ofcom has the government's "full support to take action."

However, as Guardian global technology editor Dan Milmo explained, the speed of technological abuse is outstripping the legislative process. At the time of reporting, Ofcom had not even announced a formal investigation, while the damage to victims was immediate and ongoing. The incident has exposed tensions in the Online Safety Act, criticised by some for impinging on free speech and by child safety campaigners for not being implemented quickly or rigorously enough.

Internationally, Indonesia took swift action, blocking access to Grok entirely on Saturday. In the UK, ministers had promised in December to introduce new laws banning "nudification" tools, though the timeline for that legislation remains unclear.

The Grok scandal represents a convergence of some of the internet's most troubling failures: powerful AI released with inadequate safeguards, social media dynamics that reward cruelty, and a regulatory framework struggling to match the velocity of digital harm. For the women and children targeted, the violation is profound and real. The central question now is whether those in power can, and will, act with the urgency this crisis demands.