Meta Temporarily Halts Teen Access to AI Chatbots Amid Safety Concerns
In a move aimed at addressing growing concerns over the safety of its AI chatbot characters, Meta has announced it will temporarily restrict teen users' access to them. The company cited ongoing work to implement parental controls as the reason for the pause.
The decision follows months of scrutiny of Meta's character chatbots after reports emerged that they had engaged in inappropriate conversations with teenagers. Internal documents revealed that the chatbots were permitted to have "sensual" conversations with underage users, a claim Meta later dismissed as "erroneous and inconsistent with our policies."
As part of its safety efforts, Meta has been re-training its character chatbots with added safeguards against discussions of self-harm, disordered eating, and suicide. The company has also announced plans to roll out parental controls across its platforms.
However, because these new features are complex to implement, the character chatbots will not be immediately available to teens. The restrictions will apply to users with teen accounts as well as to anyone the company's age-prediction technology suspects is under 18. Teens will still be able to use the official Meta AI chatbot, but only within age-appropriate limits.
Meta's decision comes amid increasing pressure from regulatory bodies and advocacy groups regarding the safety risks posed by companion AI characters. Investigations by the Federal Trade Commission (FTC) and the Texas attorney general have been ongoing, while a safety lawsuit brought by New Mexico's attorney general is set to begin next month.