Ofcom Probes X Over Grok AI Deepfakes, Threatens £18m Fine

The UK's media and online safety regulator, Ofcom, has formally opened an investigation into Elon Musk's social media platform, X. The probe will assess whether the site has taken sufficient steps to shield British users from explicit deepfake images created by its own Grok AI tool.

Regulator Targets AI-Generated Abuse

Ofcom stated it was acting on "deeply concerning reports" that the Grok AI chatbot on X had been utilised to produce and disseminate fake nude photographs of individuals. The watchdog clarified that these alleged breaches could constitute intimate image abuse or pornography, and may include sexualised depictions of children, potentially amounting to child sexual abuse material.

The regulator confirmed it first contacted X on Monday of last week, giving the company until Friday 9th January 2026 to respond. Should X be found in breach of UK online safety law, Ofcom has the power to levy a substantial financial penalty of up to 10% of the firm's global revenue or £18 million, whichever is greater.

Transatlantic Tensions Escalate

This regulatory move intensifies an ongoing war of words concerning the platform, formerly known as Twitter. Elon Musk has recently ramped up his criticism of the UK's Labour government, questioning, "Why is the UK government so fascist?" He has accused authorities of seeking "any excuse for censorship."

Meanwhile, from across the Atlantic, Sarah Rogers, a former US State Department official who served in the Trump administration, has cautioned the UK against considering a potential ban on X. She controversially claimed the British government was "contemplating a Russia-style X ban to protect [women] from bikini images."

A Critical Test for Online Safety Laws

The emergence of AI-powered image manipulation directly within X presents a significant challenge to the UK's Online Safety Act. The legislation's framework, primarily designed for user-posted content, is now being strained by a deluge of material generated by the platform's own software functions.

While it is already illegal in the UK to create and share explicit imagery of a person without their consent, current regulations do not give a clear-cut answer for cases where the content is produced by a platform's own integrated AI feature rather than uploaded by a user. This investigation is therefore likely to set a crucial precedent for how generative AI is governed under British online safety law.