Confronting Facebook's Growing Pains

On Wednesday, The New York Times published an in-depth account of Facebook’s poor corporate responses to political disinformation, hate speech and other harmful content on its platform. The Times described the company’s approach as “delay, deny and deflect.” A day later, Facebook issued a sharp rebuttal, insisting that it is making progress on these challenges, citing recent improvements in its efforts to take down hate speech. This back-and-forth follows a familiar pattern: the company reacts to each new set of revelations, expresses contrition for not acting more quickly, and then offers new assurances that it now has the problem under control.

Even though the company has taken a number of important corrective actions—for example, banning military officials in Myanmar who had used the platform to inflame ethnic hatred—Facebook’s catch-us-and-we-will-fix-it routine is not sustainable. The company’s brand reputation is now in serious jeopardy. Facebook needs to take a hard look at three aspects of its core business model that are hindering progress. These and other aspects of the governance of social media companies are examined in a report from the Center for Business and Human Rights at NYU Stern School of Business.

First, Facebook and its fellow Internet giants, Google and Twitter, should accept a more significant role in regulating content on their sites. For years, these companies have portrayed themselves as plumbers running pipes, with no control over the content that flows through them. They routinely deflect responsibility, saying they are not like the editors of The New York Times and are not arbiters of the truth. The reality is that the Internet companies are neither plumbers nor traditional news editors. They are something in between—and now need to help shape and enforce the rules governing this new paradigm. Their resistance to acknowledging this obvious truth has been driven by company executives and lawyers, who fear that embracing a more expansive self-oversight role will endanger the legal-liability shield the Internet companies enjoy under the 1996 Communications Decency Act. This anxiety about liability may help explain why, as reported by the Times, Facebook’s top management restrained the company’s former digital security chief, Alex Stamos, when he wanted to move more aggressively in early 2017 against Russian disinformation.

A second, related change would involve the corporate structure that shapes how critical issues are debated internally and decided. The Internet companies have built their successful platforms by creating an environment where engineering innovation is the coin of the realm. The main and often only constraint on this engineering model has been the obligation to abide by the law. In areas where the law is clear—for example, in barring child pornography—the companies have diligently complied. But aspects of the Russian government’s disinformation campaign may be entirely legal, even though they are wrong and damaging to our democracy. Internally, these companies have blurred or conflated responsibility for legal compliance with broader public policy imperatives. Each company needs to recruit senior policy directors, reporting directly to the CEO, who focus on addressing online content that may be legal but is harmful and should not be promoted on these sites.

Finally, it is time for these companies to reexamine the business assumptions that drive their decision-making. Facebook and Google are essentially advertising companies. Their very successful business models have been built around the massive traffic on their sites. Advertising rates are driven by metrics that reward greater user engagement, as measured by the number of views, clicks, likes, and shares. Various studies have demonstrated that a main driver of engagement is emotionally evocative content, and that negative emotions trump positive ones. The negative emotions that command the most attention are anger and fear. The companies have built algorithms that serve up what we want to see, which often means content that prompts anger and fear. Bad actors like the Russians are exploiting this system, dumping massive amounts of harmful and often deliberately false information into the online ecosphere. This does not mean that the companies should abandon their advertising model. What it does call for is that they adjust the algorithms and take other active measures to reduce the prevalence of highly popular but harmful content.

It is not that Facebook wants to promote this poisonous material; rather, its system is designed in a manner that allows it to happen. The burden now on these influential and powerful companies is to adjust their business models to better reflect our common values and help preserve our democratic discourse. This is not a time for half-measures and reactive responses to journalistic exposés. It is time for Facebook, Google, and Twitter to take bold measures that ensure the Internet lives up to its potential as the democratizing force we all want it to be.