‘We Want You To Be A Proud Boy’: How Social Media Facilitates Political Intimidation and Violence
September 2024
With a volatile election approaching, our new report reviews hundreds of social science studies examining the role social media plays in facilitating political intimidation and violence.
Our main finding: the research consistently shows that social media is exploited to facilitate political intimidation and violence. What’s more, certain features of social media platforms make them particularly susceptible to such exploitation, and some of those features can be changed to reduce the danger.
The report includes a series of recommendations for addressing harms exacerbated by social media.
For industry:
- Sound the alarm. To reduce risks, social media companies first need to end their tendency to deflect and obfuscate and instead acknowledge the role that their platforms play in facilitating political intimidation and violence. Bold public statements recognizing the phenomenon and accepting partial responsibility are the necessary precursor to meaningful action.
- Put more people on the content moderation beat. In 2022 and 2023, most major social media companies laid off “trust and safety” employees — the people who devise and enforce policies aimed at reducing online hatred and incitement. This ill-advised retreat must be reversed. AI can handle a lot of content moderation, but human judgment is crucial, and more humans, especially counter-extremism experts, are needed.
- Make design changes to mitigate harm. Rather than allow user anonymity, social media companies should require users to verify their identity (with provisions for storing verification data securely and/or erasing it once it is no longer needed). Platforms should monitor groups for the prevalence of content advocating violence, regardless of partisan orientation. Invitations to, and recommendations of, volatile groups should be shut down, as should the groups themselves if they become dangerous. More broadly, recommendation systems should be redesigned to reduce, rather than heighten, sectarianism. Sheer user engagement, which often rewards hateful and sensationalistic posts, should be deemphasized as a criterion for amplification.
For government:
- Enforce existing laws. With healthy respect for free speech protected by the First Amendment, the U.S. Departments of Justice and Homeland Security need to be vigilant about enforcing criminal laws banning political intimidation and the incitement of violence. The Federal Trade Commission, Federal Election Commission, and their state counterparts also must use their full authority to enforce existing laws against election fraud, voter suppression, and cyberattacks.
- Protect election workers. These mostly career employees typically live and work without the protections provided to judges, lawmakers, and executive branch officials. To stem the continued exodus of election workers, governments should raise the stakes for those who seek to intimidate these public servants by stiffening existing penalties and introducing new ones that account for the coordinated disinformation campaigns behind the harassment.
- Enhance federal authority to oversee digital industries. Longer term, the federal government needs to regulate digital industries in a more systematic fashion. Specifically, Congress should expand the consumer protection authority of the Federal Trade Commission to enable sustained oversight of digital industries. This approach would require additional funding and the recruitment of technically adept personnel.
Related
Setting Higher Standards: How Governments Can Regulate Corporate Human Rights Performance
Our report, released three months after the landmark CSDDD entered into force, provides a roadmap for regulators and companies navigating a new era of corporate human rights responsibility.
Covert Campaigns: Safeguarding Encrypted Messaging Platforms from Voter Manipulation
Our new report on encrypted messaging platforms reveals how political propagandists are exploiting these tools to manipulate voters globally, while offering recommendations for platforms, policymakers, and researchers to mitigate these threats without undermining end-to-end encryption.
Digital Risks to the 2024 Elections: Safeguarding Democracy in the Era of Disinformation
A new report by Paul M. Barrett, Justin Hendrix, and Cecely Richard-Carvajal highlights that this year's primary tech-related threat to elections isn't AI-generated content, but the spread of false, hateful, and violent content on social media platforms.