Ofcom Investigates X's AI Tool Grok Amid Government Use

The UK communications regulator, Ofcom, has opened a formal investigation into the social media platform X, focusing on its artificial intelligence chatbot, Grok. The probe raises significant questions about accountability and content governance on a digital service that UK government ministers and departments use extensively for official communications.

Scrutiny for Elon Musk's AI Creation

The investigation centres on Grok, the AI chatbot built by xAI, the artificial intelligence company founded by X's owner, Elon Musk. Ofcom's move signals growing regulatory concern over how advanced AI systems are deployed on major online platforms, particularly those integral to public discourse and the dissemination of official information. The regulator will examine whether X has complied with its statutory duties concerning content generated by the AI.

The timing and focus of the investigation matter because numerous government bodies and senior politicians actively use X to announce policies, engage with citizens, and share updates. This dual role of the platform, as both a public square and a host of experimental AI, creates a complex regulatory challenge.

Political Shake-up: Zahawi Joins Reform UK

In a separate but notable political development, former Conservative chancellor Nadhim Zahawi has defected to Reform UK. The move could both bolster and complicate the party's campaign: Zahawi's high-profile switch grants Reform UK increased media attention and political credibility.

However, political commentators Pippa Crerar and Kiran Stacey suggest the defection could also present challenges. It may intensify internal debates over policy direction and public perception, as Reform UK seeks to position itself as a distinct alternative to the traditional Conservative vote.

Broader Implications for Platform Governance

The Ofcom investigation into Grok is part of a wider landscape of increasing scrutiny for major tech platforms operating in the UK. Regulators are now actively testing their powers under new online safety frameworks, setting precedents for how AI-driven content will be monitored and controlled.

The outcome of this probe could establish important guidelines for the integration of generative AI tools on social media, especially on platforms that serve critical functions for democracy and public administration. The fact that government entities rely on X adds a layer of urgency to Ofcom's assessment of potential risks associated with AI-generated content.

Listeners and readers are encouraged to submit questions for Guardian journalists Pippa Crerar, Kiran Stacey, and John Harris by emailing politicsweeklyuk@theguardian.com.