Future Tense

How Amazon’s Moratorium on Facial Recognition Tech Is Different From IBM’s and Microsoft’s

Amazon’s Jeff Bezos speaks at the 70th International Astronautical Congress in Washington on Oct. 22. Mandel Ngan/Getty Images

Just two weeks ago, facial recognition technology seemed unstoppable. At the beginning of this year, for instance, news reports cast a light on the secretive company Clearview AI, which scraped social media sites for photos to build a database of more than 3 billion images, sold to law enforcement. Then came a sea change: On Monday, in a letter to Congress, IBM announced it would stop the sale of “general purpose” facial recognition software. On Wednesday, Amazon announced a one-year moratorium on police use of its Rekognition technology, inviting Congress to “put in place stronger regulations to govern the ethical use” of the technology. Amazon said in its statement that “Congress appears ready to take on this challenge,” referring to the mounting pressure to make fundamental changes to U.S. law enforcement following the killing of George Floyd by the Minneapolis police, and law enforcement’s heavy-handed and violent response to the Black Lives Matter protests. And on Thursday, Microsoft joined the crowd, saying it will ban law enforcement from using its facial recognition technology.

For those of us who have long worked on technology and privacy, these are exciting developments. But they raise two fundamental questions. Are we indeed ready to have a conversation about racial and social justice in relation to technology? And what should the basis for this conversation—and potential regulation—be?

IBM’s decision has been met with cynicism. Critics emphasize that the company was not leading sales of facial recognition technology and that the small gap left by its exit will be filled by non-U.S. companies already quietly shipping tools to police departments. Furthermore, they have pointed out, shelving facial recognition to focus on cloud computing could ultimately help IBM’s stock. Other commentators argue that the damage has already been done, citing IBM’s previous contracts with law enforcement. Amazon’s announcement to halt police use of Rekognition for one year has been met with even more skepticism. Many civil liberties activists and organizations who have been challenging Amazon’s surveillance technologies for years argue that Amazon should extend its moratorium until Congress passes a law regulating facial recognition technology. In addition, Amazon’s announcement comes on the heels of growing pressure from its investors to limit or ban facial recognition technologies.

While some skepticism about these tech companies’ sudden change of heart is indeed justified, that doesn’t mean the significance of these moves should be downplayed. This is a step that goes beyond Big Tech’s earlier expressions of “support” for the protest movement, toward a breakthrough opportunity to start a long-overdue conversation about racial and social justice in relation to technology with the active participation of the companies that manufacture and sell so many of the products that are harmful to racialized and marginalized groups today.

If this conversation is to succeed and lead to tangible change, it needs to acknowledge the systemic issues that lie at the foundation of facial recognition and other technologies, which not only reproduce but amplify the racist, ableist, male-centric, and other power structures existing in our societies. A broad range of stakeholders needs to be part of the dialogue, not just engineers and lawmakers: Those who directly experience the shortcomings of this technology, along with social scientists, civil society, and human rights experts, should be part of determining the standards that urgently need to be set. This includes exploring avenues beyond simply divesting from technologies used to violate human rights. For example, tech companies could, as the Algorithmic Justice League suggested the other day, actively invest in the organizations and individuals working to bring light to how technologies reinforce and perpetuate power structures.

Many years of groundbreaking work have led to this moment. The conclusion that facial recognition has a race problem builds on the foundational research done by Joy Buolamwini, Timnit Gebru, and Deborah Raji, demonstrating that various commercial facial recognition technologies showed bias on the basis of gender and skin type. In the wake of the initial announcements from IBM and Amazon, most commentators failed to reference this crucial work by black women researchers in bringing the racist problems of this type of technology to light. This erasure underlines how technology, just like the reporting on it, does not exist in a vacuum. All of it reflects the unequal power structures in our society, which the protesters are challenging right now.

There is a small but significant difference between the approach of IBM and Microsoft on the one hand and Amazon on the other. On the face of it, Amazon’s request for regulation seems more concrete than IBM’s call for a “national dialogue.” However, while IBM expressed its position with explicit reference to the nonviolation of “basic human rights and freedoms,” Amazon’s invitation to Congress refers only to the “ethical use” of the technology, leaving the human rights framework out of the picture. On Thursday, Microsoft took IBM’s human rights stance a step further by claiming it would not sell facial recognition technology unless there was “a national law in place, grounded in human rights” to govern it. This is important because the international human rights framework not only sets clear standards (rights), but also specifies the duties and responsibilities that come with them, and there are existing mechanisms to enforce it. These are really important principles for protecting individuals against harm by powerful actors. There is a long-running debate on rights vs. ethics in regulating technology, with many arguing that companies prefer the latter because they do not want to subject themselves to a tangible, high level of accountability. It is refreshing that IBM and Microsoft are willing to take this step.

We live in a world where protesters are not only routinely surveilled, but in which data-driven tools are incorporated into public policy decision-making, such as who has access to housing or shelter, who is suspected of having committed welfare or benefits fraud, how long a defendant should be sentenced, and when social services should take measures against a family. Companies like IBM, Microsoft, and Amazon are building practices, policies, and tools in areas that traditionally fall within the exclusive remit of public bodies and government departments. Those bodies and departments are usually subject to laws that ensure they act lawfully, fairly, and compatibly with human rights. Yet, as of now, tech companies are not subject to the same mechanisms of accountability. Placing their work in a human rights frame is an important acknowledgment that what these companies are doing has an impact on legally enforceable rights, and that it should be scrutinized as such.

The road to full accountability for tech companies will be long, and we will need to monitor IBM, Amazon, Microsoft, and others that will (hopefully) follow suit to see if they live up to their promises. But their decisions—particularly that of IBM, which came first—are still commendable. If ever there were momentum to make this into more than a public relations exercise and start having the racial and social justice conversations we need to have about technology, including in the law enforcement context, with all stakeholders, including companies, it is now.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.