Hundreds of accounts on TikTok are using artificial intelligence to generate content, some of it harmful, and are collectively racking up billions of views, according to a new investigation. The findings raise serious questions about content moderation and transparency on one of the world's most popular social platforms.
Automated Accounts and Viral Content
Researchers from the Paris-based non-profit AI Forensics uncovered a network of 354 AI-focused accounts that published approximately 43,000 posts using generative AI tools. Over a single month, from mid-August to mid-September, this content accumulated a staggering 4.5 billion views.
The report suggests these accounts are designed to exploit TikTok's recommendation algorithm. One tactic involved posting up to 70 times a day, often at consistent times, a strong indicator of automation. Most of the accounts were created at the start of 2024, suggesting a coordinated effort to flood the platform.
Harmful Narratives and Inadequate Labelling
The AI-generated material spanned several concerning categories. Half of the most active accounts focused on sexualised depictions of the female body, often featuring stereotypically attractive AI women. Alarmingly, some appeared to represent underage girls.
Other content mimicked legitimate broadcast news, using fake segments to push anti-immigrant narratives while misappropriating the branding of outlets such as Sky News and ABC. The researchers found that less than 2% of this content carried TikTok's official AI label, and about half was not labelled at all, greatly increasing its potential to deceive viewers.
"The blurring line between authentic human and synthetic AI-generated content on the platform is signalling a new turn towards more AI-generated content on users’ feeds," the report warned.
Platform Response and Calls for Action
Following the investigation, dozens of the identified accounts were deleted. TikTok stated that the report's claims were "unsubstantiated" and argued that the issue affects multiple platforms, not just its own. A spokesperson highlighted the company's efforts to remove harmful AI content, block bot accounts, and develop AI-labelling technologies.
However, AI Forensics expressed scepticism about the effectiveness of TikTok's new feature allowing users to limit AI content, citing the platform's failure to consistently identify such material. They urged more radical solutions, suggesting platforms should consider segregating AI-made content from human-created posts or enforcing systematic, visible labelling.
The report also noted the prevalence of so-called "slop" – bizarre, nonsensical AI content, such as animals competing in an Olympic diving contest – which ranked among the most viewed material. While sometimes entertaining, this content is primarily designed to clutter feeds and chase virality.
With TikTok recently revealing it hosts at least 1.3 billion AI-generated posts, the challenge of managing synthetic content is only set to grow, demanding more robust action from tech giants.