Children Easily Bypass Social Media Safety Checks, Exposing Them to Harmful Content
A recent cybersecurity investigation has uncovered alarming vulnerabilities in social media platforms' child protection measures, revealing that minors can access disturbing content including execution videos and fraudulent schemes with minimal effort.
Platform Vulnerabilities Exposed
Despite tech companies implementing increasingly sophisticated safety features such as age verification systems and restricted accounts for teenagers, researchers from cybersecurity firm Malwarebytes discovered these safeguards can be circumvented using simple techniques that any curious child might employ.
'It was very easy,' said Pieter Arntz, a senior researcher at Malwarebytes. 'A little curiosity and the search bar for the most part found toxic content.'
Roblox Communities Harbor Fraudulent Activity
The investigation focused on popular platforms including gaming site Roblox, which requires age verification for direct messaging but not for joining communities that function similarly to chat rooms. Researchers created an account claiming to be five years old—the minimum age for Roblox users—and successfully accessed communities flagged by cybersecurity experts as promoting fraudulent activities.
One such community, Fullz Ent., with over 740 members, presents itself as offering 'High quality Clothing' while actually using criminal terminology. According to Arntz, 'Fullz' is slang in cybercriminal circles for stolen personal information, while 'new clothes' refers to stolen payment card data.
'Such terms probably wouldn't be flagged as criminal by most parents,' Arntz noted, highlighting the sophisticated nature of these deceptive practices.
Following the December investigation, Roblox implemented mandatory facial age verification for chat features in January to limit communication between adults and children under 16.
YouTube's Guest Account Loophole
Researchers discovered that underage users can access inappropriate YouTube content without registering an account by using the 'Guest' sign-in option offered through Google, YouTube's parent company. Using this method, investigators viewed a French news outlet's video showing an ISIS member's execution and accessed content from accounts offering fraud tutorials.
While YouTube Kids provides filtered content with parental controls, the standard YouTube platform remains accessible through this guest account vulnerability.
Age Verification Shortcomings Across Platforms
Malwarebytes found that adult content on age-restricted platforms like Twitch and TikTok was 'easy to fake' through self-declaration of age. The report stated: 'While most platforms require users to be 13+, a self-declaration is often enough. All that remains is for the child to register an email address with a service that doesn't require age verification.'
On Twitch, researchers accessed an account offering 'call-girl services' in India after self-reporting as over 18, complete with links to escort advertisements and WhatsApp contact information.
TikTok's birthday verification system creates different experiences based on user age declarations, with those claiming to be adults avoiding the enhanced parental controls and content restrictions applied to under-18 accounts. This allowed researchers to find tutorials about credit card fraud and identity theft.
Instagram's Teen Account Limitations
Even with Instagram's teen accounts—introduced in 2024 for users aged 13-17 with default privacy settings and strict content filters—researchers created an account listing age as 15 and successfully found profiles promoting financial fraud using the app's search function.
Meta, Instagram's parent company, extended similar teen protection features to Facebook users the following year.
The Core Problem: Tech-Savvy Youth vs. Inadequate Systems
Arntz emphasized that the findings don't demonstrate failure by any single platform but rather highlight a systemic issue: 'Today's young people are simply more tech-savvy than the adults designing these child safety policies.'
Some children are even using AI-generated documents to bypass identity verification scans, according to the researcher.
'The problem isn't children being especially deceptive; it's that age gates rely on self-reported trust in an environment where anonymity is effortless,' Arntz explained. 'Without robust digital identity verification or parental supervision, these measures serve more as legal cover for companies than real protection for young users.'
Platform Responses and Ongoing Challenges
Roblox told investigators it is moving beyond self-reported age checks and was the first gaming company to embrace age verification. A spokesperson stated: 'We also restrict access to certain content based on a player's verified age, have a wide range of additional safety features like default chat filters, and have extremely strict policies to guard against users discussing or engaging in any form of illegal activity.'
Twitch reported it is 'continuing to increase' investment in youth safety tools, including content filters and 24/7 monitoring for policy violations using automated tools and behavioral signals. The platform employs machine-learning technology to estimate whether users are under 13.
Google, TikTok, and Meta were approached for comment regarding the investigation's findings.
The investigation underscores the ongoing challenge of protecting children in digital environments, where determined young users can often outmaneuver safety measures designed by adults. It also highlights the need for more sophisticated verification systems and greater parental awareness of online risks.