Meta is tightening the rules around how its AI-powered chatbots interact with teenagers, following growing criticism of how the company handles teen safety. The company announced new measures designed to prevent young users from engaging in conversations on sensitive or potentially harmful topics.
Restricting Sensitive Conversations
Under the latest changes, Meta’s AI assistants will no longer respond to teen inquiries about self-harm, suicide, eating disorders, or romantic and sexual themes. Rather than engaging on these subjects, the bots will direct teens toward professional resources or safe alternatives. The company described these steps as interim measures, with permanent protections expected to roll out in the coming months.
Previously, Meta had permitted its chatbots to respond on these topics, provided the answers were framed in a “safe” manner. A spokesperson admitted that the company has since realized this approach left room for harmful interactions, and Meta now acknowledges the need for stronger safeguards as it learns more about how young audiences interact with AI.
Limited Access to AI Characters
In addition to retraining the bots, Meta is restricting which AI “characters” teens can access. Some of the user-created personalities on platforms like Instagram and Facebook have drawn criticism for overly sexualized or otherwise inappropriate interactions. Teen users will now have access only to a smaller set of characters designed to encourage learning, creativity, and positive experiences.
Backlash and Policy Shift
The company’s decision comes on the heels of a major controversy. A recent investigation revealed that internal policy documents at Meta once allowed chatbot responses that could be read as romantic or sexual toward underage users. The revelations included examples of AI-generated messages praising a teen’s “youthful form” as “a masterpiece.” These disclosures fueled public outcry and prompted lawmakers to demand answers.
Senator Josh Hawley launched a formal probe into Meta’s AI practices, while 44 state attorneys general issued a joint letter condemning the risks posed by such technology. The group emphasized that child safety must remain a top priority and argued that some chatbot responses could cross into territory prohibited by law.
Looking Ahead
Meta declined to share how many minors currently use its AI chatbots, or whether these restrictions might reduce engagement. The company maintains that these new limitations are an important first step, with broader policy upgrades planned in the near future.
For now, the move reflects Meta’s attempt to balance innovation with responsibility, ensuring that teenagers can explore AI tools without being exposed to unsafe or exploitative interactions.