Why Algorithms Alone Can’t Make The Internet Safe
October 4, 2018
Major Internet companies today find themselves in the crosshairs of European regulators and face growing public distrust in both Europe and the U.S. One reason is that the full scope of harmful content online has become more evident. Until recently, Facebook, Google, and Twitter sought to downplay the magnitude of their problems in this regard, arguing that the amount of hate speech and political disinformation online was relatively tiny, a minor inconvenience when compared to the overall volume and value of Internet communications.
But now, as the heat rises, the Internet platforms have begun to acknowledge a measure of responsibility for the deleterious content on their sites, a positive first step. In responding, however, their first instinct is to revert to form, assuming that their engineers will create improved artificial-intelligence tools that can deal effectively with these challenges. Testifying about hate speech online before Congress in April, Facebook CEO Mark Zuckerberg reflected Silicon Valley’s reverence for machine-based solutions. “Over a five- to 10-year period,” he said, “we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging things for our systems.”
In the coming days, researchers at Aalto University in Finland, along with counterparts at the University of Padua in Italy, will present a new study at a workshop on Artificial Intelligence and Security. As part of their research, they successfully evaded seven different algorithms designed to detect hate speech. They concluded that all of the algorithms they tested are vulnerable to easy manipulation, contributing to, rather than solving, the problem. Their work is part of a project called Deception Detection via Text Analysis.
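The kind of weakness the researchers exploited is easy to illustrate. The sketch below shows how trivial character-level changes, such as leetspeak substitutions, can slip past a naive keyword-based filter. The blocklist, the filter, and the perturbation are hypothetical stand-ins for illustration only, not the systems or attacks examined in the study.

```python
# Hypothetical illustration: evading a naive keyword-based content filter
# with simple character substitutions. Not the researchers' actual method.

BLOCKLIST = {"idiot", "moron"}  # mild stand-in terms for the example


def naive_filter(text: str) -> bool:
    """Return True if the text contains a blocklisted word."""
    words = text.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)


def perturb(text: str) -> str:
    """Apply a trivial evasion: leetspeak substitutions in flagged words."""
    substitutions = {"i": "1", "o": "0"}
    out = []
    for word in text.split():
        if word.lower().strip(".,!?") in BLOCKLIST:
            word = "".join(substitutions.get(ch.lower(), ch) for ch in word)
        out.append(word)
    return " ".join(out)


if __name__ == "__main__":
    original = "You are an idiot"
    evaded = perturb(original)
    print(naive_filter(original))  # True  -> the original text is caught
    print(evaded)                  # "You are an 1d10t"
    print(naive_filter(evaded))    # False -> the perturbed text slips through
```

A human reader still understands the perturbed message perfectly well, which is precisely why purely automated detection is so easy to defeat.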
While the algorithms that Facebook and its competitors are developing are important, any serious effort to address these vexing problems must also include a parallel commitment to hire many more people. Human judgment and oversight of the Internet platforms are essential. Gradually, these companies have agreed to build human capacity to look for suspect ads, inauthentic accounts, and other forms of harmful content.
Over the course of this year, Facebook has pledged to double the size of its staff assigned to address safety and security, building this unit up to 20,000 people. In Germany alone, where the government began enforcing an online hate speech law in January, the company reportedly now has about 1,200 people helping to ensure Facebook’s compliance with the German statute. YouTube, part of Alphabet, the umbrella corporation that also includes Google, has promised to hire 25% more people to review content. Twitter, which is much smaller, has promised a 15% increase in staffing this year, bringing its total workforce to roughly 3,800 people, with many of the new hires devoted to what the company calls “improving the health of the platform.”
These are all positive steps, but each of these companies needs to do more. In recent years, Google and Facebook have enjoyed enormous growth and rapidly increasing profits. The two companies hold a dominant share of the online advertising market, one that is sure to grow further. Given this financial success, and the risk that harmful content online poses to these companies’ reputations and to public trust, their current staffing models fall well short of the number of employees needed to oversee their systems.
In July, the NYU Stern Center for Business and Human Rights published a report focusing on one aspect of the problem: the circulation of deliberately false information online by agents of the Russian government. Over the last few years, during our elections and beyond, Russian operatives have conducted a well-organized and well-funded campaign to promote political disinformation within the U.S. and other Western societies. This activity has aimed at sowing discord and undermining our democracies.
The NYU Stern report, entitled “Combating Russian Disinformation,” recommended that each of the major Internet companies “create and staff Russian-focused teams that include specialists with Russian language skills and area expertise.” The report proposed that these teams be integrated into existing efforts to address disinformation or inauthentic sources but be “explicitly assigned to grapple with hostile Russian activities in all aspects of these businesses.” To date, none of the companies have agreed to adopt this commonsense suggestion, even as Russian efforts to erode our democracy remain as serious a problem as ever. The time for the companies to act is now.