Digital Risks to the 2024 Elections: Safeguarding Democracy in the Era of Disinformation
February 2024
Elections in the U.S. and around the world in 2024 face daunting digital risks.
A new report from the NYU Stern Center for Business and Human Rights argues that the leading tech-related threat to this year’s elections stems not from the creation of content with artificial intelligence but from a more familiar source: the distribution of false, hateful, and violent content via social media platforms.
Despite the disruptions and violence that roiled the U.S. presidential election in 2020 and Brazil’s election in 2022, major platform companies have retreated from some of their past commitments to promote election integrity.
Social media companies like Meta (parent of Facebook, Instagram, and WhatsApp), Google (YouTube), and X, formerly known as Twitter, have made layoffs and policy changes that diminish their election integrity efforts.
Related
Reality Check: How to Protect Human Rights in the 3D Immersive Web
Our report, written by Mariana Olaizola Rosenblat, explains the risks to privacy and safety exacerbated by immersive technologies and recommends steps tech companies and the government can take to minimize those risks.
Safeguarding AI: Addressing the Risks of Generative Artificial Intelligence
Our report on safeguarding AI argues that the best way to prepare for potential existential risks in the future is to begin regulating, now, the AI harms right in front of us.
Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them
Our report describes how extremist actors are exploiting online gaming sites to disseminate violent ideologies, network with like-minded people, and perpetrate real-world harm.