TikTok Empowers Users to Control AI Content Flood on Feeds

TikTok is handing control back to its users, launching a new feature that will allow them to directly manage the amount of artificial intelligence-made content appearing in their personal feeds. The move comes as the video-sharing giant revealed that more than 1 billion AI-generated videos have been uploaded to its platform.

Taking Control of Your Feed

The new control is being introduced as part of the 'manage topics' settings within the TikTok app. Users will soon find an option specifically for 'AI-generated content'. By toggling this setting, they can choose to either reduce or increase the volume of such content they see, tailoring their feed to their personal preferences.

This development arrives amid a significant surge in AI-produced material online, driven by the rapid advancement and availability of new video-generation tools such as OpenAI's Sora and Google's Veo. The scale of this trend was highlighted by a Guardian investigation in August, which found that nearly one in ten of the fastest-growing YouTube channels globally are dedicated solely to AI-generated videos, much of it low-quality 'AI slop'.

Jade Nester, TikTok’s European director of public policy for safety and privacy, explained the company's rationale: "We know from our community that many people enjoy content made with AI tools, from digital art to science explainers, and we want to give people the power to see more or less of that, based on their own preferences." The announcement was made at TikTok's annual European trust and safety forum held in Dublin.

Labelling, Literacy, and AI Moderation

To ensure transparency, TikTok is bolstering its labelling system. The platform already mandates that creators label 'realistic' AI-made content, with policies in place to ban harmful deepfakes of public figures or crisis events. Unlabelled realistic AI videos can be removed for violating community guidelines.

Now, the app will also automatically attach an 'AI-made' watermark to content created with its own AI tools or identified via metadata from the C2PA (Coalition for Content Provenance and Authenticity), an industry-wide initiative for marking AI-generated material. This measure is designed to prevent users from circumventing the labelling process.

Alongside these features, TikTok is committing to user education with the launch of a $2m (£1.5m) AI literacy fund. This fund will support experts and organisations, including the non-profit Girls Who Code, in creating educational content about the responsible use of AI.

The Human Moderator Controversy

This push for greater technological control comes amidst controversy over TikTok's own use of AI. The company is facing scrutiny for plans to make 439 UK-based content moderators in its London trust and safety team redundant, a move that has raised concerns among trade unions and online safety experts about humans being replaced by automated systems.

Brie Pegum, TikTok’s global head of program management for trust and safety, defended the strategy. She stated that human moderation would continue to play a crucial role, but argued that AI can help protect employees by filtering out the most distressing content before it reaches human reviewers. According to TikTok, its automated systems have already led to a 76% decrease in shocking and graphic content viewed by its human moderation team over the past year.

"We think that it’s essential to balance humans and technology to keep the platform safe," Pegum said. "We’ll have humans as part of that process, but we also think it’s very important to reduce exposure to harmful content as quickly as possible, which is where a lot of the machine support is really helping us work at speed."

The new AI content control feature will be tested over the coming weeks, with a global rollout expected to follow.