TikTok Exec Defends AI Safety for Teens Amid UK Job Cuts
TikTok boss insists AI keeps teens safe from harm

In an exclusive interview with Sky News, a senior TikTok executive has firmly defended the platform's increasing reliance on artificial intelligence for content moderation, insisting it does not compromise the safety of its teenage users. The conversation comes amid significant operational changes and intense scrutiny from UK regulators and unions.

The AI Revolution in Content Moderation

Ali Law, TikTok's Director of Public Policy and Government Affairs for northern Europe, explained that a major shift is underway within the company. While TikTok has used AI for years to help moderate content, the company is now dramatically expanding that role. The platform currently removes approximately 85% of policy-violating content automatically, without any human intervention.

According to Mr Law, the key change is the vastly improved sophistication of AI models. He stated that newer systems can understand nuanced context in a way previous technology could not. "A great example is being able to identify a weapon," he told Sky News. "Whereas previous models may have been able to identify a knife, newer models can tell the difference between a knife being used in a cooking video and a knife in a graphic, violent encounter."

TikTok asserts that it sets a high benchmark for new moderation technology, ensuring it matches or exceeds existing processes. These AI changes are being introduced gradually with continuous human oversight to quickly address any performance issues.

Human Moderator Cuts and UK Safety Concerns

This pivot towards AI has a direct human cost. TikTok is proposing to cut more than 400 human moderator jobs in London alone, with reports suggesting some roles may be moved to other countries. The move has sparked significant concern within the UK.

Paul Nowak, General Secretary of the Trades Union Congress (TUC), has publicly stated that TikTok has repeatedly "failed to provide a good enough answer" on how these cuts would impact user safety. Furthermore, the UK's Science, Innovation and Technology Committee, chaired by Labour MP Dame Chi Onwurah, has launched a formal probe into the job reductions.

Dame Chi labelled the cuts "deeply concerning," arguing that AI "just isn't reliable or safe enough to take on work like this" and warning of a "real risk" to UK users. When Sky News directly asked if he could guarantee UK users' safety post-cuts, Mr Law redirected the focus to outcomes. "Our focus is on making sure the platform is as safe as possible," he said, emphasising the deployment of advanced technology working alongside thousands of global trust and safety professionals.

New Wellness Tools and a Defence of the Platform

The interview took place at TikTok's European headquarters in Dublin following an online safety event where the company unveiled new features. A central announcement was a new in-app Time and Wellbeing hub, developed in partnership with the Digital Wellness Lab at Boston Children's Hospital.

This hub gamifies mindfulness techniques, offering affirmations, encouraging users not to use TikTok overnight, and helping them manage their screen time. Cori Stott, Executive Director of the Digital Wellness Lab, explained that the tool was built directly into the app because young people want wellness resources "where they already are."

Addressing broader concerns about social media's impact on youth mental health, Mr Law, a parent himself, expressed personal investment in online safety. He detailed the protective measures already in place for younger users, including private profiles for under-16s, no direct messaging access, a default one-hour screen time limit, and a 10 PM sleep reminder.

"The experience is one that does try and promote a balanced approach to using the app," Mr Law asserted. He expressed confidence in the changes, concluding that the real power lies in the combination of the best technology and human experts working together, a model he insists will continue to guide TikTok's safety strategy.