Spreading The Big Lie: How Social Media Sites Have Amplified False Claims of U.S. Election Fraud
September 2022
As the 2022 midterms approach, falsehoods about election fraud continue to spread via social media.
The Big Lie that Joseph Biden did not legitimately win the presidency in 2020 has mutated into a forward-looking belief among many Republicans that American democracy more generally no longer functions fairly. In the minds of the most ardent believers, political opponents — Democrats — must be stopped by any means necessary, including the sort of violence unleashed at the U.S. Capitol on January 6, 2021. Social media companies have promised to protect the upcoming midterm elections from mis- and disinformation, but their flawed policies and inconsistent enforcement result in the continued amplification of election denialism, especially in key battleground states like Arizona, Michigan, and Pennsylvania.
The consequences of election denialism spreading online are grave.
If even a handful of Republican deniers are elected this year to state offices that oversee presidential elections — such as governor and secretary of state — the 2024 process could descend into chaos and violence, making the events of 2020–2021 seem tame by comparison.