Meta Feels New Pressure to Address Harm to Teenagers
June 24, 2024
Our NYU colleague Jonathan Haidt’s best-selling book, The Anxious Generation, has brought renewed attention to the harm that social media causes to the mental health of at least some teenagers. Several other recent developments on this front are worth pulling together to underscore the urgency of this issue — and the need for action, especially from the largest social media company, Meta:
- Surgeon General Vivek Murthy recently called on Congress to mandate warning labels for social media platforms, following the model of the health-warning labels imposed on the tobacco and alcohol industries. “It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents,” Murthy wrote in a June 17 op-ed in The New York Times. “A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe,” he added. “Evidence from tobacco studies show that warning labels can increase awareness and change behavior.”
- Separately, our former NYU colleague Laura Edelson has just reported on an illuminating experiment comparing the content that major social media platforms feed to adolescents with what they show to adults. Her main finding was that Meta’s Instagram will show a 13-year-old user sexually explicit and violent content, while TikTok shields adolescents from this type of material. Edelson, a computer scientist who now teaches and does research at Northeastern University, explains what she found when she created 13-year-old and 23-year-old female personas on Instagram Reels and TikTok:
“In the first 45 minutes after setting up an account, Instagram Reels showed my 13-year-old persona videos with performers miming sex acts, videos where they could pause to see the content creator nude, and videos from other teen girls comparing the size of their breasts or butt in lingerie. Reels showed my 23-year-old account similar content. I was not able to detect a meaningful difference in what the feed algorithm would show these two users. In one case, my 13-year-old account was even shown content with a ‘click through’ warning that it contained graphic, violent content.
“By contrast, these two personas were shown very different content on TikTok, both from each other and from what they were shown on IG Reels. Over the 45 minutes I ran the experiment, my 13-year-old persona was shown virtually no racy content. My 23-year-old self was shown some sexually suggestive content, but overall the ‘adult’ experience on TikTok appears to have much less explicit content than the ‘teen’ experience on Reels. The user experiences on TikTok are different enough to make me suspect TikTok is using a different feed algorithm or drawing from a different content pool for minors and adults.”
- Finally, on June 22, the Times published a helpful dispatch distilling the wave of lawsuits filed by state attorneys general against Meta, alleging that the company’s Instagram and Facebook platforms ensnare teenagers and children while deceiving the public about these hazards. “Using a coordinated legal approach reminiscent of the government’s pursuit of Big Tobacco in the 1990s, the attorneys general seek to compel Meta to bolster protections for minors,” the Times reported. A Meta spokesperson denied the allegations and said the company already does plenty to protect younger users.
But given these developments, Meta would be well advised to abandon its characteristic defensive crouch and instead confront mounting evidence that at least some vulnerable minors are suffering as a result of the copious time they spend online.