TikTok is set to expand its age verification technology across Europe in the coming weeks as concerns about minors using social media platforms intensify. The move comes amid growing calls for stricter regulation, with several countries considering bans or limits on social media use by under-16s.
The platform's new system will analyze user profile information, posted videos, and behavior signals to predict whether an account may belong to a minor under the age of 13. This technology will be used in conjunction with human moderators to review accounts flagged as potentially belonging to minors, rather than automatically banning them.
Users will be able to appeal if their account is removed in error, and can provide additional verification such as facial age estimation via Yoti, credit card authorization, or government-approved identification. A European pilot of the system led to the removal of thousands of accounts.
TikTok maintains that the system complies with data and privacy laws, stating that it uses the predicted likelihood of a user being under 13 only to decide whether to send an account to human moderators, or to monitor and improve the technology. The company emphasizes its commitment to protecting young users, particularly teens, while ensuring safety in a "privacy-preserving manner".
Other social media platforms, including Meta (which owns Facebook), are also exploring similar age verification technologies. In December, Australia implemented a social media ban for minors under 16, resulting in over 4.7 million accounts being removed across multiple platforms.
The UK government has expressed concerns about the amount of time children and teenagers spend on their smartphones, with Prime Minister Keir Starmer suggesting he is open to a social media ban for young people. The European parliament and Denmark are also pushing for age limits or bans on social media use by minors.
TikTok's expansion of its age verification technology aims to address these concerns and provide a more effective solution to ensure the safety and well-being of young users across Europe.