The Internet has connected people around the world at an unprecedented speed and scale. Yet, these same characteristics are being used to spread distrust and misinformation. Internet companies must find new ways to minimize harmful content while keeping the Internet accessible, open, and diverse.
The Internet has provided countless benefits for users across the globe, including access to information, economic empowerment, education, and communication. At the same time, extremist and false content has polluted the largest Internet platforms. Russian interference in the U.S. elections demonstrates the potential for such harmful content to erode democracy and sow distrust. Social media companies now face the enormous challenge of developing policies and advertising strategies that prevent the systematic exploitation of their platforms.
Diagnosing the Problem
Since 2016, the Center has examined the ways bad actors, such as the Russian government and ISIS, exploit the vulnerabilities of social media platforms and threaten civil discourse, democracy, and human rights in this country and around the world.
Our first report, Harmful Content: The Role of Internet Platform Companies in Fighting Terrorist Incitement and Politically Motivated Disinformation, identified the range of strategies commonly used by these bad actors to promote harmful content on the largest Internet platforms.
In 2018, we published Combating Russian Disinformation: The Case for Stepping Up the Fight Online, an in-depth analysis of Russian disinformation campaigns. The report concludes by recommending a series of steps that industry and governments can take to overcome this and future digital threats to democracy.
In early 2019, we released Tackling Domestic Disinformation: What Social Media Companies Need to Do, a detailed look at false content generated in the U.S. that undermines democracy.
We work with stakeholders to define a way forward that combines the right mix of government oversight, company self-regulation, and public education and action.