Last week, Eric Schmidt, executive chairman of Alphabet, Google’s parent company, caused a stir by suggesting publicly that the search giant would seek to “de-rank” Russia’s main international television service and a prominent Russian news website. RT, the television service, and Sputnik, the website, have been implicated by U.S. intelligence agencies in the Kremlin’s far-reaching attempt to meddle in the 2016 U.S. presidential election.
It’s not clear what exactly Google intends to do. “De-ranking” lacks a technical definition, and in his remarks at the Halifax International Security Forum, Schmidt emphasized that the search platform doesn’t intend to block all access to RT or Sputnik. Instead, it’s likely that Schmidt meant that Google would marginalize the propagandistic Russian outlets by pushing them down the list of search results.
This is not an unprecedented intervention. In 2016, Google announced that webpages whose content was not easily accessible to users (for example, because of intrusive pop-ups) might not rank as highly going forward. Applying de-ranking to disinformation, however, would be new.
As a general matter, it’s promising that Google, and other platforms such as Facebook and Twitter, are devoting time and resources to the difficult challenge of balancing freedom of speech and the real harms caused by political disinformation. Twitter recently banned all advertising by RT and Sputnik, citing their role in the election interference.
In our recent report, titled “Harmful Content,” the Center for Business and Human Rights addressed Russian disinformation, arguing that the platform companies have a responsibility to refine their algorithms so they can take into account “new and more precise indicators of the credibility of content.” Google seems to be moving in the right direction. Discussing another form of Russian online interference—the activities of Russian “troll farms,” which produce large volumes of false social media posts—Schmidt said at the Halifax conference that Google needs to “make sure as the other side gets more automated, we also are more automated.”
Another example of constructive new thinking comes from the Trust Project, based at Santa Clara University’s Markkula Center for Applied Ethics. The project has proposed a set of “indicators” intended to help Internet users assess the quality and reliability of news. The indicators include the nature of an article’s original publisher; the background of the author; and whether the piece in question offers opinion, straight news, or advertiser-sponsored material. The platforms, including Google, have agreed to explore how to use the Trust Project indicators. They should do so with a sense of urgency.
Unsurprisingly, Schmidt’s comments provoked a furious reaction from the Russian outlets. Their coverage of the story attributed Google’s move to Schmidt’s connections to the U.S. Democratic Party, noting that he served as an advisor to former President Barack Obama and has been described by Hillary Clinton as a “longtime friend.” The Russian assertion of conspiracy theories seems more like an extension of the 2016 disinformation campaign than a serious response to Google.
Google’s thinking has evolved, Schmidt explained: “We started with a position that the American general view that ‘bad’ speech would be replaced by ‘good’ speech in a crowded network,” he said. “The problem in the last year is that this may not be true in certain situations, especially when you have a well-funded opponent who is trying to actively spread this information.” This is precisely the argument we made in our report.