TikTok to Roll Out Stronger Age Verification Across the EU


TikTok will begin to roll out new age-verification technology across the EU in the coming weeks, as calls for an Australia-style social media ban for under-16s grow in countries including the UK.

ByteDance-owned TikTok, and other major platforms popular with young people, such as YouTube, are coming under increasing pressure to better identify and remove accounts belonging to children. Regulators and child safety campaigners argue that existing checks are too easy to bypass, with self-declared birthdays and limited verification tools allowing underage users to access platforms designed for teens and adults.

The looming EU rollout highlights the direction of travel for the social media industry: companies are now expected to demonstrate they can verify users’ ages more reliably, particularly when services handle large amounts of personal data and serve algorithmically recommended content.

How TikTok’s new EU age-check system works

The system, which has been quietly piloted in the EU over the past year, analyses profile information, posted videos, and behavioural signals to predict whether an account may belong to a user under the age of 13.

Such automated systems aim to identify patterns that may indicate a user is younger than claimed, such as the tone of videos, the nature of interactions, or other engagement signals. While TikTok already sets its minimum age at 13, enforcement has been a persistent challenge across the industry, largely because children can input false birthdates at signup.

TikTok said accounts flagged by the system will be reviewed by specialist moderators rather than automatically banned, and may then be removed if the review confirms the user is underage. A similar pilot in the UK led to the removal of thousands of accounts.

That human review step appears designed to reduce the risk of mistaken removals. Age estimation based on behavioural cues can be imperfect, potentially impacting legitimate users who appear young, as well as creators with youthful voices or styles. TikTok’s approach suggests it is attempting to balance stronger enforcement with safeguards to prevent mass account takedowns driven solely by algorithmic decisions.
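The flag-then-review pipeline described above can be sketched in a few lines. This is a minimal illustrative model only: the signal names, weights, and threshold are assumptions for the sake of the example, not details of TikTok's actual system, which has not been disclosed.

```python
# Hypothetical sketch of a flag-then-review pipeline: an automated score
# derived from behavioural signals only *queues* an account for human
# review; removal requires a moderator decision, never the algorithm alone.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int                 # self-declared birthday at signup
    youth_content_score: float      # 0..1, model score from posted videos
    youth_interaction_score: float  # 0..1, score from engagement patterns

def underage_likelihood(s: AccountSignals) -> float:
    """Combine behavioural signals into a single 0..1 likelihood.

    Illustrative fixed weights; a real system would use a trained model.
    """
    return 0.6 * s.youth_content_score + 0.4 * s.youth_interaction_score

def triage(s: AccountSignals, threshold: float = 0.75) -> str:
    """Return the next action for an account: no automatic bans."""
    if underage_likelihood(s) >= threshold:
        return "queue_for_human_review"  # a moderator decides on removal
    return "no_action"

# Example: signals that strongly suggest an under-13 user
flagged = AccountSignals(stated_age=15, youth_content_score=0.9,
                         youth_interaction_score=0.8)
print(triage(flagged))  # queue_for_human_review
```

The key design point, mirrored in the sketch, is that the classifier's output is an input to human moderation rather than a removal decision in itself, which limits the blast radius of misclassified accounts.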

Growing pressure on platforms to police child access

The move comes amid widening political debate over how far governments should go in limiting young people’s access to social media. In particular, pressure has intensified since Australia introduced a national restriction.

In December, Australia implemented a social media ban for people under the age of 16. On Thursday (Jan. 15) the country’s eSafety commissioner revealed that more than 4.7m accounts had been removed across 10 platforms – including YouTube, TikTok, Instagram, Snap, and Facebook – since the ban was implemented on Dec. 10.

Supporters of age-based restrictions argue they create a clear enforcement baseline, pushing platforms to more proactively identify and remove underage users. Critics, however, warn such bans may be difficult to enforce without requiring sweeping forms of identity verification that could raise privacy risks for all users, including adults.

The large number of removals in Australia underscores both the scale of underage participation and the potential disruptive impact of stricter enforcement. If European governments pursue similar measures, platforms could face pressure to introduce higher-friction signups or more frequent re-verification, potentially changing how users access and experience social media.

EU scrutiny intensifies under data protection and safety rules

The rollout of the new TikTok system comes as European authorities scrutinise how platforms verify users’ ages under data protection rules.

In Europe, the issue is closely tied to privacy obligations and the handling of minors’ data. Regulators are increasingly focused on whether platforms can prove that children are not being exposed to inappropriate content, targeted advertising, or addictive recommendation systems without adequate safeguards. Stronger age verification is becoming a central expectation, not only as a child safety measure but also as a compliance requirement for data protection and online safety frameworks.

TikTok told Reuters the new technology was built specifically to comply with the EU’s regulatory requirements. The company has worked with Ireland’s Data Protection Commission, its lead EU privacy regulator, while developing the system.

Because TikTok’s main EU regulatory oversight is tied to Ireland, the involvement of the Data Protection Commission suggests the company wants to demonstrate early cooperation and reduce the risk of enforcement action. It may also signal that regulators are moving toward more formal expectations for how platforms validate age claims, rather than relying on lighter-touch measures like pop-up reminders or optional checks.

UK debate resurfaces as Starmer signals openness to ban

Earlier this week, Keir Starmer told MPs he was open to a social media ban for young people in the UK after becoming concerned about the amount of time children and teenagers were spending on their smartphones.

The prime minister told Labour MPs he had become alarmed at reports of five-year-olds spending hours in front of screens each day, as well as increasingly worried about the damage social media was doing to under-16s.

Starmer has previously opposed banning social media for children, believing such a move would be difficult to police and could push teenagers towards the dark web.

The shifting tone from the UK government adds momentum to the broader European debate. Even without an outright ban, the political focus on time spent on screens and potential harms could translate into tougher regulatory expectations around default privacy settings, recommendation algorithms, and age verification standards.

Families and campaigners push for stronger protections

Public pressure is also being fuelled by campaigning families and child safety advocates, who argue that current systems are insufficient.

Earlier this month, Ellen Roome, whose 14-year-old son Jools Sweeney died after an online challenge went wrong, called for more rights for parents to access social media accounts of their children if they die.

The push for bereaved parents’ access rights reflects growing demands for accountability and transparency from platforms, particularly in cases where social media behaviour may be linked to harmful trends or dangerous viral challenges.

Across Europe, lawmakers are exploring different age thresholds and regulatory approaches. The European parliament is pushing for age limits on social media, while Denmark wants to ban social media for those under 15.

What other platforms are doing

TikTok’s move is part of a wider industry shift toward third-party verification tools and stronger enforcement.

Meta, the parent company of Facebook and Instagram, uses the verification company Yoti to verify users’ ages on Facebook.

The contrast highlights different approaches: some companies rely on external verification providers, while TikTok is deploying an internal system that estimates age based on user signals and content behaviour. Both methods, however, raise difficult questions about privacy, accuracy, and the potential for unintended consequences, such as misclassifying users or pushing young people onto unregulated platforms.

Lingering questions over enforcement consistency

TikTok’s new rollout may also revive concerns over how consistently age rules have been enforced in the past.

In 2023 a Guardian investigation found that moderators were being told to allow under-13s to stay on the platform if they claimed their parents were overseeing their accounts.

The EU rollout will therefore be watched closely by regulators and campaigners as a test of whether TikTok’s enforcement is becoming more robust in practice, not just in policy. If the system proves effective, it could become a model for other platforms. If it generates controversy over accuracy or over-collection of data, it could intensify calls for governments to set a single, legally mandated approach to verifying age online.

