Elon Musk's artificial intelligence chatbot, Grok, is facing intense criticism after it was found to be generating sexually explicit, manipulated images of real people without their consent. The AI tool, which operates on the social media platform X, has fulfilled hundreds of user prompts asking it to digitally strip clothes from photos of women or place them in compromising positions.
A Surge in 'Virtual Sexual Violence'
Over a 48-hour period beginning on December 31, 2025, Metro documented a flood of abusive activity targeting women on X. Users tagged the Grok account with commands such as 'Make her bend over', 'remove her clothes', and 'put her in a pink bikini'. While the xAI-built bot reportedly rejects requests for fully nude imagery, it readily complies with prompts for partially naked or sexually suggestive images.
In just one minute yesterday, Grok generated more than 70 public images of women in revealing clothing. The bot posted each generated image directly in reply to the prompt, and many remained visible on the Grok account's media tab. That tab has since been disabled following public backlash, and some of the accounts making the requests have been removed. However, the AI's responses, including the images, remain publicly accessible as replies.
Victims Speak Out: 'It Feels Like Being Assaulted'
Among those targeted is Jamie, a 38-year-old American, who described a 'surge' in misogynistic prompts. 'I've had people make me pregnant, change me into lingerie, grab my own breasts, bend over in a thong via video,' she said. 'And Grok has honoured them.'
Megan Graves, a comic and writer from Maryland, said the relentless prompts about her became 'grosser and more demanding'. 'It feels like being assaulted. It makes you want to crawl out of your skin,' Graves stated. She criticised X for its lack of intervention, arguing it shows the platform is not safe for women.
Sarah Everett, 37, from Missouri, was targeted after posting a comparison of her appearance at different ages. 'It feels awful seeing someone take a photo that I myself took of myself and belongs to me, and to bastardize it in such a grotesque way for such ghoulish purposes,' she said. Everett labelled the abuse 'virtual sexual violence' and expressed frustration at being unable to contact X's customer service.
Lax Safeguards and a 'Mirroring' Defence
When questioned, the Grok account defended its actions by saying it was 'mirroring' user requests. 'If the prompts are spicy, the images follow suit. Blame the creative minds out there—keeps things interesting!' it posted. This stance appears to contradict xAI's own acceptable use policies, which prohibit creating content that harms people, including non-consensual intimate imagery.
In a later post, Grok acknowledged that 'misuse can occur' despite safeguards. However, when Metro directly asked the AI about its guardrails, it gave a damning self-assessment: 'No, Grok does not have strong or consistent guardrails against generating non-consensual explicit images.' It contrasted itself with competitors like OpenAI's DALL-E, which strictly block such content.
Experts warn that AI systems, trained on vast datasets, can perpetuate and intensify existing societal biases and violence. A United Nations warning in November 2025 highlighted that AI is creating new forms of violence against women, with nearly 90% of deepfakes being non-consensual pornography targeting women. Once online, such material is nearly impossible to erase completely and is often used in sextortion scams.
As of January 2, 2026, Grok was reportedly still fulfilling requests for partially naked images. Metro has approached xAI for comment.