Grok AI Scandal Puts the UK's Online Safety Act to the Test
Grok AI exposes Online Safety Act weaknesses

The UK's landmark Online Safety Act, heralded as a robust new framework to protect users from digital harm, is facing its first major test from artificial intelligence. Only months into its rollout, the Act's limitations have been starkly exposed by a scandal involving Elon Musk's AI chatbot, Grok.

The Grok Deepfake Scandal

It emerged this week that users of Grok, an AI tool embedded within Musk's social media platform X, have been using it to generate dozens of sexualised images of women and girls without their consent. The tool produced this disturbing content in a matter of minutes.

Public threads on X showed the tool being prompted to digitally undress women and children, dress them in bikinis, and recreate pornographic scenarios. In some cases, the images involved minors. All of this content was visible to any user scrolling past.

High-profile figures, including television presenter Maya Jama, publicly condemned the abuse after discovering explicit, AI-edited images of themselves circulating on the platform. For critics of the Online Safety Act, this is precisely the type of emerging threat they warned about.

A Law Built for Platforms, Not AI Engines

The fundamental weakness exposed by the Grok scandal lies in the Act's design. The legislation was constructed around a traditional model where platforms host user-posted content, and regulators intervene when harm occurs.

Generative AI has shattered this logic. Grok is not merely distributing harmful content; it is generating it on an industrial scale. While the OSA makes it illegal to create or share explicit deepfake images without consent, the legal waters are muddied when the 'creator' is an automated system baked into the platform itself.

X, the platform formerly known as Twitter, has reportedly treated Grok, built by xAI, as a separate product, despite the images appearing in the feeds of users within UK jurisdiction. This technical distinction, which may not survive legal scrutiny, highlights a critical structural flaw: the Act did not anticipate AI systems acting as content engines.

Ofcom Under Pressure to Enforce

The UK regulator, Ofcom, has made 'urgent contact' with X and xAI to assess compliance with the Online Safety Act. This proactive step marks a change from the pre-OSA era of voluntary cooperation, but the regulator's enforcement credibility is now on the line.

Ofcom has moved swiftly against offshore pornography sites and smaller operators, issuing seven-figure fines and threatening access blocks. However, Grok presents a far harder challenge as a high-profile, politically charged product backed by a company that just raised $20bn in new funding.

If enforcement falters here, critics will argue the Act only has teeth for easy targets, undermining its core promise to keep users, especially children, safe from digital abuse.

An Act Arriving as the Ground Shifts

The uncomfortable conclusion is that the Online Safety Act may be arriving just as the technological landscape evolves beneath it. Designed for an internet of posts and platforms, it must now govern an internet of models and machines.

The Grok scandal does not render the Act useless, but it strongly suggests it is incomplete. Without swifter guidance, clearer rules, and a more proactive stance on AI-generated harm, the UK's flagship online safety law risks becoming a classic example of regulation perpetually playing catch-up.