A Guardian investigation has uncovered a disturbing global trend: millions of people are using the secure messaging app Telegram to create and share deepfake nude images, driven by the proliferation of advanced artificial intelligence tools. By making it easier than ever to generate non-consensual graphic content, AI is industrialising the online abuse of women.
Widespread Channels and Global Reach
The analysis identified at least 150 Telegram channels, large encrypted group chats popular for their privacy features, whose users appear to span multiple countries, including the UK, Brazil, China, Nigeria, Russia, and India. Some channels offer paid services in which photos of any woman are transformed, using AI, into videos depicting sexual acts.
Many more channels provide a continuous feed of AI-generated images, targeting celebrities, social media influencers, and ordinary women, making them appear nude or engaged in sexual activities. Followers also use these platforms to exchange tips on available deepfake tools, fostering a community around this abusive practice.
Examples from Different Regions
On a Russian-language Telegram channel promoting deepfake "blogger leaks" and "celebrity leaks," a post advertised an AI nudification bot with the slogan, "a neural network that doesn't know the word 'no'." It encouraged users to "choose positions, shapes and locations" and "do everything with her that you can't do in real life."
A Chinese-language channel with nearly 25,000 subscribers saw men sharing videos of their "first loves" or "girlfriend's best friend," altered with AI to strip the women of their clothing. In Nigeria, a network of Telegram channels disseminates deepfakes alongside hundreds of stolen intimate images, highlighting the cross-border nature of the problem.
Telegram's Response and Moderation Efforts
Telegram, as a secure messaging app, allows users to create groups or channels to broadcast content to unlimited contacts. Its terms of service prohibit "illegal pornographic content" on publicly viewable channels and bots, as well as activities deemed illegal in most countries.
The platform stated to the Guardian that deepfake pornography and related tools are explicitly forbidden, with content routinely removed when discovered. Moderators use custom AI tools to proactively monitor public parts of the platform and accept reports to enforce these rules. In 2025, Telegram reported removing over 952,000 pieces of offending material.
Recent Context and Broader Ecosystem
This issue has gained public attention recently, following incidents involving Grok, the generative AI chatbot on Elon Musk's social media platform X, which was used to create thousands of non-consensual images of women. This led to outrage and prompted xAI to restrict such capabilities, while the UK's media regulator, Ofcom, launched an investigation into X.
However, Telegram remains part of a larger ecosystem of forums, websites, and apps that facilitate easy access to graphic, non-consensual content. A report by the Tech Transparency Project found dozens of nudification apps available on major app stores, with 705 million downloads collectively. Apple and Google have taken steps to remove or suspend many of these apps, but the problem persists.
Expert Insights and Societal Impact
Anne Craanen, a researcher at the London-based Institute for Strategic Dialogue, noted that Telegram channels are central to an internet ecosystem devoted to creating and disseminating non-consensual intimate images. They allow users to evade controls from larger platforms like Google and share methods to bypass AI safeguards. Craanen emphasised that the dissemination and celebration of this material reinforce misogynistic undertones, aiming to punish or silence women.
Last year, Meta shut down an Italian Facebook group called Mia Moglie ("My Wife"), where men shared intimate images, but investigations have shown that ads for AI nudification tools continue to appear on its platforms, with thousands identified since late last year.
Legal and Regulatory Challenges
AI tools have intensified a global rise in online violence against women, enabling almost anyone to create and share abusive images. In many jurisdictions, particularly in the global south, there are few legal avenues for holding perpetrators accountable. According to 2024 World Bank data, fewer than 40% of countries have laws protecting women and girls from cyber-harassment or cyberstalking. The UN estimates that 1.8 billion women and girls lack legal protection from online harassment and technology-facilitated abuse.
Campaigners highlight that lack of regulation, combined with issues like poor digital literacy and poverty, makes women and girls in low-income countries particularly vulnerable. Ugochi Ihe of TechHer in Nigeria reported cases where women borrowing from loan apps have been blackmailed using AI, noting that abuse is becoming "more creative" daily.
Real-Life Consequences and Victim Stories
The impact of digital abuse is devastating, leading to mental health difficulties, social isolation, and loss of employment. Mercy Mutemi, a lawyer in Kenya representing victims of deepfake abuse, shared that some clients have been denied jobs or faced disciplinary actions at school due to circulated deepfake images. Ihe added that her organisation has handled complaints from women ostracised by their families after threats involving intimate images from Telegram channels.
She stressed, "Once it has gone out, there's no reclaiming your dignity, your identity. Even if the perpetrator admits it was a deepfake, the reputational damage is unrecoverable due to the vast number of people who may have seen it."
In numerous instances, investigations have revealed that while one Telegram channel may be shut down, another with a nearly identical name often remains active, underscoring the persistent and evolving nature of this digital abuse crisis.