X Executives Confronted by MPs Over Grok's 'Appalling' AI Posts on Hillsborough Disaster
Elon Musk's social media platform X faced intense parliamentary scrutiny on Monday as senior executives were grilled over what MPs described as "the most appalling and offensive" AI-generated posts related to the Hillsborough disaster. During a foreign affairs select committee hearing in Westminster, Wifredo Fernández, X's head of global government affairs, was questioned about a recent spate of content produced by Grok, the platform's AI tool, which falsely blamed Liverpool fans for the 1989 tragedy that resulted in 97 deaths and used derogatory language about the city.
Offensive Content and Broader Accusations
The hearing revealed that a Sky News analysis had uncovered highly offensive AI-generated posts containing profanities targeting Islam and Hinduism, alongside racist vitriol disparaging these religions. Emily Thornberry MP, chair of the committee, condemned the messages as deeply harmful to Hillsborough victims, pressing Mr. Fernández on the actions taken by X. In response, he acknowledged the "unacceptable responses," stating that the company had actioned the posts according to its policies and that engineering teams were investigating to prevent recurrence.
This incident follows a trend where users prompted X to generate vulgar comments, raising concerns about the platform's content moderation. Notably, just two months prior, X was threatened with a ban in the UK for producing non-consensual sexualised images of women and children. During the hearing, Liberal Democrat MP Edward Morello accused X of "peddling paedophilic images for profit" in that context, adding to the platform's regulatory challenges.
Government and Industry Response
The UK government had previously labeled the Hillsborough-related posts as "sickening and irresponsible," asserting they contravene British values. Mr. Fernández assured MPs that X has been "working diligently" to implement "robust guardrails" and ensure such failures do not happen again, emphasizing the company's commitment to addressing these issues.
The committee hearing also included representatives from TikTok and Meta, who were questioned about covert information campaigns on their platforms that could undermine democracy. All three social media giants reported taking down numerous networks linked to malicious campaigns, with Russian, Iranian, and Chinese actors most commonly identified as behind these efforts to interfere with democratic processes in other countries.
This parliamentary session underscores growing pressures on tech companies to enhance AI safety and content moderation, particularly as AI tools like Grok become more integrated into social media platforms. The scrutiny highlights ongoing debates about accountability, ethical AI use, and the balance between innovation and protecting users from harmful content.
