TikTok is set to launch a new age verification system across the European Union, marking a significant step in its efforts to keep underage users off the platform. The move comes as pressure mounts on social media giants to improve their methods for identifying and removing accounts belonging to children.
The new technology, which has been quietly tested in EU countries over the past year, uses a combination of profile information, video analysis, and behavioral signals to predict whether an account belongs to someone under 13. Accounts flagged by the system will be reviewed by human moderators before being potentially removed.
TikTok claims that its approach is designed to comply with EU data protection rules and has worked closely with Ireland's Data Protection Commission to develop the system. In contrast, other major platforms such as YouTube have employed more piecemeal approaches to age verification.
The rollout of TikTok's new system comes amid growing concerns about the impact of social media on young users. A recent ban in Australia has seen over 4.7 million accounts removed across 10 platforms since December, with experts warning that similar measures could be necessary in the EU.
UK politicians have also expressed increasing alarm at the amount of time children spend on their smartphones, with the prime minister, Keir Starmer, suggesting he may support a social media ban for under-16s. The European parliament is pushing for age limits on social media, while Denmark wants to ban platforms altogether for those under 15.
However, the effectiveness of these measures remains uncertain, particularly when it comes to identifying and removing child accounts that have already been created. A recent investigation found that moderators were being told to allow under-13s to stay on platforms if their parents claimed to be overseeing their accounts – a loophole that could be exploited by users seeking to avoid detection.
As the debate over social media regulation continues, TikTok's new age verification system represents a notable attempt to protect vulnerable young users. But with the European Union's data protection rules still evolving, it remains to be seen whether these efforts will be enough to address the growing concerns surrounding social media and child safety.