An independent review has raised concerns that most of Instagram’s protections for young users are not working as intended, despite the company promoting its teen accounts as a model for online safety.
Background to Teen Accounts
Meta introduced mandatory teen accounts on Instagram in 2024, later extending the system to Facebook and Messenger in 2025. These accounts were designed to reduce exposure to harmful content, restrict contact with unknown adults, and provide parents with clearer oversight. The initiative followed rising criticism of social media platforms over the impact of online material on children and teenagers.
Findings From the Review
Researchers examined 47 safety tools that Instagram has promoted in recent years. Only eight were judged to operate effectively. Thirty were assessed as substantially ineffective, discontinued, or easy to bypass, while nine offered partial benefits but left major gaps.
Tests with simulated accounts showed that harmful search terms still produced results related to eating disorders, suicide, and violent content, and recommendation algorithms continued to surface risky material despite controls intended to filter it. Protections meant to block inappropriate contact also failed in many tests: adults were still able to interact with minors, and minors were at times nudged to initiate exchanges themselves.
Some widely advertised features had been altered or renamed in ways that made them harder to identify, while other tools were no longer available. The researchers also found that accounts apparently operated by children under 13 were visible on the platform, and some of those posts attracted sexualised comments from adults.
Groups Behind the Study
The review was organised by researchers at Northeastern University and Cybersecurity for Democracy, a US-based research centre, with support from child-safety groups including the Molly Rose Foundation in the United Kingdom and Parents for Safe Online Spaces in the United States. Arturo Béjar, a former Meta safety executive who had previously raised concerns inside the company, also contributed to the work.
Advocates said the findings point to weaknesses in Meta’s approach, noting that the failures echo earlier tragedies where young people were exposed to harmful online content.
Company Response
Meta rejected the conclusions, stating that teen accounts limit harmful exposure, reduce unwanted contact, and encourage healthier use patterns such as less late-night activity. The company argued that the review misrepresented its tools and ignored subsequent improvements. Internal documents reported by Reuters, however, indicated that Meta staff had already warned management about flaws in automated systems that were supposed to detect self-harm and eating-disorder material.
Political and Regulatory Pressure
The review arrives at a time when regulators in multiple countries are pressing for stronger protections. In the United Kingdom, Ofcom has authority under the Online Safety Act to compel companies to address harmful content affecting children. In the United States, lawmakers have intensified scrutiny of Meta after hearings revealed concerns over both social media and virtual reality platforms used by minors.
Outlook
The investigation portrays Instagram’s teen safety framework as inconsistent and incomplete. While Meta highlights improvements, researchers and advocacy groups argue that many of the tools remain unreliable. With governments in both the UK and US increasing oversight, the durability of these protections may be determined less by corporate policy shifts and more by regulatory enforcement.
Image: The Jopwell Collection / Unsplash
Notes: This post was edited/created using GenAI tools.
