Grok AI Image Scandal: My Photos Were Sexualised Online Without Consent

In a chilling account of digital violation, journalist Sharan Dhaliwal has revealed how her ordinary social media photos were stolen and plastered across pornographic websites, and why she now fears the wider threat posed by X's controversial Grok AI image generator.

A Shocking Discovery in 2021

The ordeal began in 2021, when a follower on the platform then known as Twitter sent Dhaliwal a disturbing direct message. The message stated: 'I don't want this to come across as weird, but I found your photos on a porn site and I'm not sure you know about it.' Initially disbelieving, Dhaliwal requested a link and was horrified to find a gallery on a major adult site filled with images taken directly from her Instagram account.

All the pictures showed her fully clothed in mundane poses, yet they were accompanied by a barrage of sexualised and racist comments. One comment under the first image read: 'I would love to r*** that p***'. She discovered that a feature on the site allowed any user to upload galleries, and an anonymous individual had chosen to misuse her personal photos.

'I had become sexualised without my knowledge or consent. I was disgusted and felt violated,' Dhaliwal recounted. 'A feeling of dread filled me, and my anxiety spiked.'

The New Frontier of Abuse: Grok AI on X

Dhaliwal draws a direct parallel between that past violation and the current crisis unfolding on X, the platform rebranded by owner Elon Musk. She expresses profound alarm at the capabilities of X's AI tool, Grok, which has sparked international outrage for generating non-consensual, sexually explicit 'deepfake' images.

The tool has been used to strip the clothing from images of women, men, and even children, based on user prompts. Dhaliwal, who left X in April last year, describes the platform as having become a 'cesspool of racism, far-right misinformation and the kind of bad AI and offensive caption combination' she associates with content from a 'dodgy uncle'.

She states that had she remained on X, it would have become just another venue for people to misuse her image, but with the terrifying new capability to digitally undress her using AI. 'I urge anyone still on the site to join me in quitting,' she writes, 'and I urge the government to really consider the nuclear option of banning X in the UK.'

Global Repercussions and a Call to Action

The scandal has already triggered significant international response. Grok has been officially banned in Malaysia and Indonesia, and UK media regulator Ofcom has announced an investigation into the safety concerns posed by X. Multiple individuals, predominantly women, have reported being targeted by users who sexualise them using Grok.

Dhaliwal explains the particularly agonising mechanism: users create these images by replying to a person's original post and tagging Grok with a prompt, meaning the resulting sexualised image appears directly in the victim's timeline. 'You have to watch while someone takes off your clothes – and for many of these sick individuals that's clearly part of the thrill,' she observes.

Although she has quit X, Dhaliwal recently carried out a fear-driven search via a friend's account. It found no images of her generated by Grok, but she acknowledges this offers little comfort. She highlights the most sickening aspect: the tool is being used to create explicit imagery of children.

Dhaliwal condemns Elon Musk's response, which framed criticism as a desire for 'censorship' that suppresses 'free speech' – a point Musk illustrated by posting an AI-generated image of Prime Minister Keir Starmer in a bikini. 'We should not be trying to save a platform that is owned by a mewling manchild who doesn't understand the importance of reporting and banning illegal pedophilic or sexualised material,' she argues.

Concluding that X is 'beyond reform, beyond redemption', Sharan Dhaliwal's powerful testimony stands as a stark warning about the unchecked dangers of AI-powered harassment and a fervent plea for decisive regulatory action to protect the millions still at risk.