Elon Musk's AI Grok Used to Create Fake Sexual Images of Son's Mother

The mother of one of Elon Musk's children has spoken out about feeling "horrified and violated" after discovering that supporters of the billionaire used his artificial intelligence tool, Grok, to generate fake sexualised images of her. Ashley St Clair, a writer and political strategist, told The Guardian that the AI had been weaponised to create a form of digital revenge porn, with manipulated pictures including one of her as a young child.

From Public Figure to AI-Generated Target

St Clair, who had a child with Musk in 2024 and is now estranged from him, said the abuse began after she spoke publicly about the X owner's personal life. She believes Musk's acolytes targeted her because of those disclosures. The situation escalated rapidly when users of Grok, the AI tool integrated into the X platform, began submitting real photographs of her with prompts to undress her or place her in sexually compromising positions.

One particularly distressing image was created from a photo in which she was digitally placed in a bikini, turned around and bent over. In the background was her toddler's backpack, a detail that made the violation feel intensely personal and invasive. "It's another tool of harassment. Consent is the whole issue," St Clair stated, emphasising that non-consensual digital undressing, particularly of a child, constitutes a sexual offence.

A Platform for Proliferating Abuse

The abuse, which started over a weekend, saw manipulated images remain live on the platform for extended periods. St Clair said an AI-altered picture of her as a 14-year-old stayed visible for more than 12 hours and was removed only after The Guardian contacted X for comment. She had repeatedly reported the content to both X and Grok's moderation teams but found response times lengthening and the actions taken insufficient.

Since going public with her ordeal, St Clair says other victims have contacted her to share similarly disturbing AI-generated content. This includes an image of a six-year-old girl, originally pictured in a full-length dress, that Grok was prompted to depict in a blue bikini covered with a substance resembling semen. "These sickos used to have to go to the dark depths of the internet and now it is on a mainstream social media app," she lamented, highlighting how Grok has mainstreamed this form of sexual abuse.

Systemic Silencing and Legal Recourse

St Clair frames the issue as a broader civil rights and safety concern for women online. She argues that the AI is being "trained" on abusive prompts from men, while women are being frightened off the platform, creating an inherent bias in the technology. "If you are a woman you can't post a picture and you can't speak or you risk this abuse," she said. "I believe this is by design... They are trying to expel women from the conversation."

She holds Musk and his team directly responsible, asserting they have the power to stop the widespread abuse "in minutes" but choose not to. St Clair is now considering legal action, believing the fake images could be classed as revenge porn under new US legislation like the Take It Down Act. In the UK, laws to ban the digital undressing of individuals are in development but have not yet been enacted.

In a statement, an X spokesperson said: "We take action against illegal content on X, including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."