Twitter Leadership Fails The Infowars Test
August 10, 2018
This week several major Internet companies announced that they were taking down Infowars, Alex Jones’ inflammatory online platform. Notably absent from the list was Twitter. The case tests the core values of the major Internet providers and the practical application of their community standards. To date, Twitter is failing that test.
Alex Jones embodies our spreading political polarization, peddling malicious falsehoods and exacerbating racial and ethnic tensions in American society. Since he founded Infowars in 1999, Jones has built a massive online following, in part through his promotion of provocative but clearly false assertions. He has advanced the view that the September 11, 2001, attacks on the World Trade Center and the Pentagon were staged by the U.S. government. In 2012, he opined that the killing of 26 students and teachers at a school in Newtown, Conn., was staged by left-wing forces seeking to promote gun control.
In finally taking down Infowars accounts, Apple, Facebook, Google and Spotify made value judgments about the type of content they would not tolerate on their sites. These are private companies, and each has the power, and I would say the responsibility, to make such decisions. In exercising these judgments, each company must balance the importance of free speech and a diversity of views against the dangers of harmful content: for example, Jones’ demonstrably false assertions about Sandy Hook, or “hate speech” that further divides our society. These corporate values are codified in each company’s community standards.
When dealing with Infowars, most of these companies relied on their prohibitions against “hate speech.” Google, on Google Play, cited its standard barring “apps that promote violence, or incite hatred against individuals or groups based on race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or any other characteristic that is associated with systemic discrimination or marginalization.” Similarly, Apple explained its removal of Infowars as upholding the company’s position that it “does not tolerate hate speech.” Apple added: “We have clear guidelines that creators and developers must follow to ensure we provide a safe environment for all of our users.”
Facebook also based its decision on concerns about hate speech. But the company went out of its way to say the decision was not related to disinformation: “While much of the discussion around Infowars has been related to false news . . . none of the violations that spurred today’s removals were related to this.”
Twitter declined to follow the lead of these other companies. CEO Jack Dorsey explained: “We know that’s hard for many, but the reason is simple: He hasn’t violated our rules. We’ll enforce if he does. And we’ll continue to promote a healthy conversational environment by ensuring tweets aren’t artificially amplified.”
Dorsey later tweeted: “If we succumb and simply react to outside pressure, rather than straightforward principles we enforce (and evolve) impartially regardless of political viewpoints, we become a service that’s constructed by our personal views that can swing in any direction.”
Dorsey is right to call for “straightforward principles,” but thus far he has failed to adopt the needed principles or to apply them to the company he leads. One such principle: demonstrably false content, especially content that is politically motivated and designed to mislead, should not be promoted on the Internet. The deaths at Sandy Hook were senseless murders, not part of a plot to promote gun control. And when Internet companies spot this type of politically motivated disinformation spreading across their platforms, they have a duty to act.
Their actions may vary with the circumstances. In some instances, they may need to delete sites outright, as they have done with Infowars. In others, they can demote such information, juxtapose contrary views alongside it, or post notices that certain content is disputed. It will be important to further develop guideposts to help these companies make these tough choices. But it is no longer a viable option for them to say this is someone else’s problem.
Dorsey sidesteps Twitter’s responsibility when he says that “accounts like Jones’ can often sensationalize issues and spread unsubstantiated rumors, so it’s critical [that] journalists document, validate and refute such information directly so people can form their own opinions. This is what serves the public conversation best.”
This is an abdication of leadership and responsibility on his part. Internet companies have too long asserted that they are simply plumbers, running value-neutral pipes that convey content. Like Dorsey, they contend that it is for others to determine the truth of what runs through those pipes and to refute falsehoods. These assertions are no longer viable: the speed and reach of the Internet make old ways of combating disinformation obsolete. In the last decade, companies like Twitter, Facebook and Google have amassed great power and influence. With that influence comes greater responsibility.