Meta Rolls Out Enhanced Safety Features for Young Users

Meta has rolled out key enhancements to safeguard under‑18 users on Facebook and Instagram. The platform has removed over 635,000 accounts linked to predatory behavior targeting minors and introduced advanced Direct Message (DM) safety features, algorithmic restrictions, and nudity protection tools. With these updates, teens now see additional context (like account creation dates), one‑tap “block & report” functions, and stricter content controls. Meta has also extended protections to adult-managed accounts featuring kids, limiting exposure to potentially suspicious viewers. These moves come amid legal scrutiny and broad global pressure to protect youth from online harm.

  • Account Purge: Removed 635,000 accounts in total: 135,000 for leaving sexualized comments targeting minors and 500,000 more linked to those abusive accounts.

  • DM Safety Enhancements: Teens see when a messaging account was created and receive safety tips before engaging, including a combined block & report option.

  • Account Insight: Displaying the account creation date helps teenagers spot suspicious or newly created profiles (a simplified sketch of such a check follows this list).

  • Algorithmic Filtering: Adult-managed accounts featuring children are restricted from being recommended to potentially suspicious adults and vice versa; offensive comments are filtered. 

  • Nudity Protection: Blurring of suspected nude images is the default for teen accounts, and 99% of teens keep the filter enabled.

  • Automatic “Teen Accounts”: AI detects accounts suspected of belonging to minors and defaults them to teen settings; messaging and account visibility are more restricted.

  • Expanded Protections: Adults running accounts featuring minors now receive Teen account protections—including hidden words and strict messaging defaults.
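
For readers curious about the mechanics, the sketch below shows one way a client could surface this kind of DM safety context: flag message requests from recently created accounts and attach the combined block & report action. It is purely illustrative and does not reflect Meta’s actual systems; the data model, the 30-day threshold, and the function names are all assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical threshold: treat accounts younger than 30 days as "new".
NEW_ACCOUNT_THRESHOLD = timedelta(days=30)

@dataclass
class MessageRequest:
    sender_username: str
    sender_created_at: datetime  # when the sender's account was created

def build_safety_context(req: MessageRequest) -> dict:
    """Return display hints for a teen's DM request screen (illustrative only)."""
    account_age = datetime.now(timezone.utc) - req.sender_created_at
    return {
        "created_label": req.sender_created_at.strftime("Joined %B %Y"),
        "is_new_account": account_age < NEW_ACCOUNT_THRESHOLD,
        # One-tap action combining block and report, as described above.
        "actions": ["block_and_report"],
    }

# Example: a request from an account created two days ago gets flagged as new.
req = MessageRequest("new_user_123", datetime.now(timezone.utc) - timedelta(days=2))
print(build_safety_context(req))
```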

In June alone, teens blocked over 1 million suspicious accounts and reported another 1 million after receiving safety warnings—indicating users are engaging with the tools. Meta’s recent removals are among its largest sweeps targeting predatory behavior.

Meta is under mounting legal scrutiny in the U.S. and abroad, including lawsuits alleging harm to teen mental health. The timing suggests the company is positioning these updates as evidence of proactive reform. 

Regulators in Australia, the U.S., and Europe are exploring or implementing age verification laws and mental health protections. Meta’s safety measures position the company ahead of anticipated legislation such as the U.S. Kids Online Safety Act.

  • Users now receive real-time alerts on unknown DMs, including when the sender’s account was created—helping identify suspicious or newly established accounts.

  • The block & report feature in DMs consolidates actions into one tap for quick response and feedback. 

  • Meta’s nudity protection automatically blurs potentially inappropriate images. Over 40% of blurred images received were never opened, and the accompanying warnings prompted users to reconsider forwarding images.

  • Teen accounts default to private; messaging is restricted to mutual followers.

  • Adult-run, child-focused profiles are similarly limited, especially in their visibility to potentially suspicious adults (a simplified sketch of this kind of recommendation filter follows this list).

  • Cyber-safety advocates welcome the transparency tools and simpler reporting workflows.

  • Some privacy experts express concern that age-verification policies could drive teens to VPNs or unmoderated spaces—especially under mandatory systems proposed in legislation like Utah’s age-verification law.

  • Meta’s update echoes a broader industry movement: age verification, parental-control APIs, and restrictive legislation like Australia’s social media age ban.

  • Laws such as the U.S. Kids Online Safety Act (KOSA) require platforms to minimize exposure to harmful content and offer transparency and parental tools. Meta’s new controls resemble early compliance features.
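
To make the recommendation restrictions above concrete, here is a deliberately simplified sketch of a rule-based filter that hides child-featuring accounts from a flagged adult’s suggestions, and vice versa. It is illustrative only; the signals, field names, and logic are assumptions rather than Meta’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    username: str
    is_adult: bool = True
    features_children: bool = False  # adult-managed account featuring minors
    suspicion_signals: set = field(default_factory=set)  # e.g. prior blocks or reports by teens

def is_potentially_suspicious(account: Account) -> bool:
    """Hypothetical check: any flagged signal marks the account as suspicious."""
    return bool(account.suspicion_signals & {"blocked_by_teens", "reported_for_comments"})

def filter_recommendations(viewer: Account, candidates: list) -> list:
    """Drop child-featuring accounts from a suspicious adult's suggestions, and vice versa."""
    if viewer.is_adult and is_potentially_suspicious(viewer):
        return [c for c in candidates if not c.features_children]
    if viewer.features_children:
        return [c for c in candidates if not is_potentially_suspicious(c)]
    return candidates

# Example: the account featuring children is hidden from the flagged viewer's suggestions.
viewer = Account("flagged_adult", suspicion_signals={"blocked_by_teens"})
candidates = [Account("family_vlog", features_children=True), Account("cooking_fan")]
print([a.username for a in filter_recommendations(viewer, candidates)])  # ['cooking_fan']
```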

Meta’s latest safety upgrades mark a significant evolution in its approach to protecting teen and child users. By removing predatory accounts at scale and enhancing tools for safe messaging, nudity protection, and account transparency, the company is making strides toward creating safer digital spaces for young users.