How Microsoft is Sparking a Crucial Conversation on Facial Recognition Technology

This morning Microsoft President Brad Smith posted an essay on the company’s blog raising important questions about the human rights challenges posed by facial recognition technology. Microsoft, and Smith in particular, have led the tech industry in addressing the human rights issues that inevitably arise from the spreading use of emerging technologies. As Smith points out, these new capabilities are often a force for good, but they are also subject to manipulation and can cause great harm. What is clear is that these technologies are now part of our lives and will play an ever-greater role in the future. Smith rightly focuses on the vexing challenge of governing facial recognition, a rapidly evolving area that requires new governance models in which both governments and companies assume greater responsibilities.

Smith’s blog stresses the need for greater governmental engagement and oversight. He correctly argues that governments should develop laws and regulatory models that protect privacy, account for bias, and prevent the misapplication of technologies like facial recognition. But we must be mindful that governments themselves too often misuse these technologies, especially in the areas of security and law enforcement. Recognizing how politically polarized the U.S. is at this moment, Smith calls for the creation of an independent commission of experts to help frame the agenda and make informed recommendations. Though not a panacea, this type of independent expert engagement will be critically important.

Smith acknowledges that tech companies need to do more as well. As one example, he cites the bias in current technologies, which are more accurate in identifying white men than either women or people of color. This reflects the fact that most of those who develop code for these services are themselves white men. As Smith rightly states, it is incumbent on the companies to correct this bias by hiring a more diverse workforce that reflects the communities they serve.

But Smith could have gone further in elaborating on the other steps companies like Microsoft need to take, individually and collectively, to address the human rights issues raised by facial recognition technology. On issues like this, it is not enough for companies simply to say that they are following broad aspirational principles. They need to develop specific industry standards and metrics, negotiated with other key stakeholders and consistent with human rights principles. These standards must be rooted in core international human rights norms relating to privacy and to protections against arbitrary state action taken in the name of law enforcement or national security. The standards and metrics, and company efforts to abide by them, should be transparent. And importantly, all of this activity needs to be subject to independent assessment and accountability.

Microsoft deserves praise for taking the initiative in the discussion about how to regulate facial recognition technology. Now the company and its rivals must do the difficult work of instituting industry standards, even as governments fulfill their own corresponding responsibility to ensure the proper use of the technology.