Instagram’s Protections for Teens Reinforce the Need for Safety-by-Design Regulation

September 20, 2024

On September 17, the U.S. House Energy and Commerce Committee scheduled a markup of the Kids Online Safety Act, a bill that would require online platforms to exercise “reasonable care” in designing their sites to avoid mental health harms to children. The next day, Instagram unveiled design changes to its accounts for teens, limiting their visibility to strangers by default, enhancing parental supervision tools, and introducing a “daily limit” prompt after 60 minutes of use. Is this sequence a coincidence? Probably not.

Instagram and its parent company, Meta, have been under mounting pressure to reduce the harmful impacts of their products on minors. In 2021, Meta whistleblower Frances Haugen disclosed internal company research showing that Instagram had a negative effect on a sizable proportion of teens, especially teenage girls, even as the company publicly denied such harm. Concurrently, social scientists have grown increasingly vocal about what they observe to be a causal relationship between social media use and adverse mental health conditions among teenagers over the past decade. Drawing on these research findings, which are not without detractors, the U.S. Surgeon General recently called for a warning label to be added to social media platforms reminding parents that social media use is associated with significant mental health harms among adolescents. Meanwhile, over 40 state attorneys general have sued Meta to hold the company liable for “exploit[ing] young users for profit.”

At each juncture, Meta had an opportunity to introduce common-sense design changes to Instagram accounts used by teenagers. But it was not until public pressure drove momentum for the bipartisan Kids Online Safety Act that Instagram acted. While Meta denies that these changes are the result of legal and regulatory pressure, that assertion does not seem credible. Regulators and parents should view the new protections as confirmation that they are taking the right approach in pushing for design-based platform reforms.

Targeting design elements makes sense from a free speech perspective. It keeps the government out of the business of intervening in platforms’ content policies and enforcement decisions, which would likely violate the First Amendment. Reforming design features also incentivizes platforms to focus their efforts on systemic and ultimately more meaningful harm mitigation, rather than one-off content moderation efforts.

But default settings and other protections that apply only to teens will have a meaningful impact only to the extent that Instagram is able to identify which of its users are minors. The current system relies on teenagers self-identifying as such, and many will not do so in order to avoid parental supervision online. Unfortunately, technology companies have yet to develop an accurate method of independent age verification that does not put users’ data privacy at risk. Until they do, measures that aim to protect some, but not all, users will leave many falling through the cracks.
