The UK's media regulator, Ofcom, is facing mounting pressure to accelerate its response to a disturbing online trend involving artificial intelligence. The controversy centres on Grok, a chatbot owned by Elon Musk's xAI, which users have reportedly prompted to generate manipulated images of women and girls wearing bikinis or in states of undress.
Outcry Over AI-Generated Imagery
Science, Innovation and Technology Secretary Liz Kendall has publicly condemned the proliferation of these digitally altered pictures, labelling them "unacceptable in decent society." The images, some of which are overtly sexualised or violent, have sparked international outrage. The situation is further aggravated by evidence from the Internet Watch Foundation, a charity, indicating that Grok Imagine, an AI tool for generating images and videos, has been used to create illegal child sexual abuse material.
Despite assurances from X, the platform also owned by Musk, that such material is removed, critics argue there is a glaring lack of effective safeguards. This is particularly true for content that, while cruel and violating, may not technically breach existing laws.
Regulatory Gaps and Industry Resistance
Ofcom is currently assessing whether to launch a formal investigation. Observers argue that, to maintain public trust, the regulator must show far greater urgency and be more transparent about the steps it plans to take. The UK's Online Safety Act treats service disruption as a last resort, requiring a lengthy process before a site can be blocked, and uncooperative platforms can easily drag that procedure out.
A parallel concern is the enforcement of existing rules. Fines levied against pornography websites for failing to implement age-verification requirements have so far gone unpaid, raising questions about whether the regulator has real teeth. That context makes the response to the Grok controversy a critical test case.
Calls for Immediate Legislative Action
The incident has ignited a debate about the pace of technological change versus the speed of lawmaking. While long-term solutions, such as Denmark's proposal to grant people copyright over their own likeness, are being discussed, there is a consensus that immediate action is required.
Experts are urging the government to close legal gaps related to chatbots now, rather than waiting for a comprehensive AI bill that could take years to enact. Proposals include amending the Crime and Policing Bill and, as Professor Clare McGlynn has suggested, adopting a broader legislative approach to sexual offences so that emerging threats are addressed systematically.
The core demand from campaigners and politicians is clear: regulators and ministers must prioritise the safety and wellbeing of users, particularly women and children, over the interests of powerful tech platforms. The rules governing the online world must be both democratically agreed and robustly enforced.