A cogent and forceful argument for the government to regulate face-recognition technology was published on Friday—not by a legislator, pundit, or advocacy group, but by Microsoft.
In a lengthy blog post, the Redmond, Washington-based tech giant made the case that face recognition is too potent, and comes with too many risks, for the public to leave entirely in the hands of private companies such as itself. The technology, which uses machine-learning software to automatically identify people in photographs and video footage, is increasingly used by social networks and photo apps, and as a security measure on devices like iPhones. It’s also being used by a growing number of law enforcement agencies to help identify suspects in crimes such as the mass shooting at the Capital Gazette newspaper in Annapolis, Maryland, in June.
In the blog post, Microsoft President Brad Smith called for a government initiative to regulate the technology’s use, informed by a bipartisan expert commission. He wrote:
Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses.
It’s a surprisingly strong stance, coming from a company that works on face recognition technology of its own. It comes after Microsoft took heavy criticism for a January blog post in which it touted its work on behalf of U.S. Immigration and Customs Enforcement. Though Microsoft mentioned face recognition technology in that post, the company clarified in its Friday blog post that its contract with ICE did not include anything related to face recognition. Rather, it provided the agency with productivity software such as email, calendar, messaging, and document management.
Microsoft’s post touched on several of the technology’s limitations, including research that shows it’s more accurate in identifying white males than it is in identifying women or people of color. This echoed an argument recently advanced by Brian Brackeen, the CEO of a face recognition company called Kairos, both in a TechCrunch op-ed and on Slate’s If Then podcast.
But even if face recognition could be made to treat different groups of people equally, it would still pose chilling privacy threats, Smith argued:
Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.
One way to read Microsoft’s statement is that it’s the result of a process of genuine introspection by a private company on behalf of society. Certainly the post itself is thoughtfully argued, and the call for regulation reads as earnest.
There are also more cynical ways to interpret the company’s motivations. Microsoft’s largest tech rivals—Apple, Google, Amazon, and Facebook—all use face recognition in various forms and are leaders in developing the technology. Microsoft may reckon that taking a stand for regulation would serve its interests better than continuing to compete on an unregulated playing field. It previously made a big push for online privacy at a time when its Bing search engine was struggling to compete with Google’s.
At the same time, Microsoft is working to distance itself from the controversy over its work for ICE, which infuriated liberals, including many of Microsoft’s own employees. Workers at each of the major tech companies have recently been organizing to fight their employers over products, contracts, and policies that they view as socially irresponsible. Google decided not to renew a Pentagon drone contract, and Amazon workers have pressured CEO Jeff Bezos to drop face recognition contracts with law enforcement.
Microsoft’s blog post accepts that the company has a responsibility to make sure its products aren’t abused. But Smith makes a persuasive case that it should be the government, not individual tech companies, that decides which uses of face recognition are appropriate. In essence, he’s asking the government to take away some of Microsoft’s power, something CEOs almost never do. From his post:
While we appreciate that some people today are calling for tech companies to make these decisions—and we recognize a clear need for our own exercise of responsibility, as discussed further below—we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic. We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.
Ultimately, it might not matter that much whether Microsoft’s motivations are purely altruistic, or partly self-serving. What matters is that the company may be helping to put face recognition legislation on the national agenda, while putting pressure on its rivals to take their own stands. Facebook and Google did not immediately respond to Slate’s requests for comment Friday.