Instagram to Censor Teen Accounts Under New Global Safety Update
Instagram is rolling out a global update that places teen accounts behind PG-13 content filters and stricter parental controls to protect young users.

Instagram has announced a sweeping new update to better protect teenage users, introducing global content filters that will restrict what teens can see and interact with on the platform.
The update, unveiled by Meta, will automatically place all users under 18 years old into a default “13+ content setting,” a mode designed to ensure teens only see material suitable for a PG-13 audience.
Under this setting, posts containing strong language, drug use, dangerous stunts, or mature themes will no longer appear in teen feeds. Meta confirmed that the filter system will also block searches for restricted topics, even if users try to bypass it using misspellings or coded keywords.
In addition, Meta is launching a new “Limited Content” mode, which gives parents greater control over their teens’ Instagram experience. Through this feature, parents can decide what kind of posts their children can view, comment on, or interact with, and teens will not be able to change these settings without parental approval.
The social media giant also plans to enhance its AI-driven age verification tools to detect users who may have provided false birth information. These systems will automatically shift such users into appropriate age-based content categories to maintain safety standards.
Meta says this new update builds upon previous safety measures, including private account defaults, content sensitivity filters, and restricted nighttime notifications for teen accounts — all of which will continue to operate alongside the new changes.
Furthermore, Instagram will now prevent teenagers from following accounts known for regularly posting inappropriate or explicit content. Where a teen already follows such an account, that connection will be automatically removed or blocked.
According to Meta, the rollout will begin in the United States, Canada, the United Kingdom, and Australia, with plans to expand the update to other countries later this year.
The company emphasized that this move reflects its ongoing commitment to creating a safer digital environment for younger audiences, amid rising global concerns over the impact of social media on mental health and well-being.