Future Tense

A Pause on Amazon’s Police Partnerships Is Not Enough

An Amazon logo in New Delhi on Jan. 15. Sajjad Hussain/Getty Images

On Wednesday, in a brief blog post, Amazon made a surprising announcement: that it would implement a one-year moratorium on police use of its facial recognition service, Rekognition. The post did not mention the furious nationwide demand for reform in response to the killings of George Floyd, Breonna Taylor, and too many other Black people. But it did cite developments “in recent days” indicating that Congress seemed prepared to implement “stronger regulations to govern the ethical use of facial recognition technology”—regulations that Amazon claims to be advocating for and ready to help shape in the coming year.

But Amazon’s sudden commitment to ostensibly transformative reform should be taken with a grain of salt hefty enough to unseat a Confederate monument from its rock-solid base. A pause on police partnerships isn’t enough. Americans won’t get the privacy and civil rights protections they need because a company like Amazon decides to grant them. We need those protections guaranteed by meaningful legislation and regulation, with functional enforcement mechanisms—legislation and regulation that have not been watered down by Amazon. But the fact that a company as powerful, canny, and obdurate as Amazon feels the need to make us believe it wants to grant us those protections gives me hope: It means Amazon is losing.

While a number of Amazon’s products and services have long been criticized for fueling institutional racism, Rekognition has drawn particular scrutiny. Local, state, and federal law enforcement agencies all over the country are using facial recognition programs—from Amazon and elsewhere—to identify people in photographs and video footage, almost always without their knowledge or consent. Their use of the technology is subject to few rules and little meaningful oversight, despite a steady stream of reports about deeply flawed systems often used incorrectly. Police embrace of facial recognition has disturbing implications for the safety and freedom of the people surveilled: A false match can mean an intrusive investigation, inclusion on a watch list, or even arrest. Rekognition also offers so-called emotion analysis capabilities, which purport to assess a subject’s emotional state (though it’s unclear how widely law enforcement currently uses them); these add still more potential for junk science to threaten privacy, erode due process, and put people’s lives at risk. Amazon has refused to disclose how many law enforcement agencies use Rekognition, but its connected-doorbell subsidiary Ring partners with 1,300 law enforcement agencies and counting.

To make matters worse, studies, including a prominent study on Rekognition specifically, have demonstrated that facial recognition technology struggles to accurately identify and assess nonwhite faces, particularly Black faces. In the hands of law enforcement agencies using facial recognition to monitor crowds, identify possible criminal suspects, and (supposedly) evaluate a subject’s emotional state, a tool that struggles to accurately identify or assess Black faces exacerbates the institutional racism that already plagues American policing. Amazon is thus profiting handsomely from the practices that people all over the country (and abroad) have been demonstrating against.

Research scientists, privacy and civil rights advocates, policymakers from both parties, and even many of the company’s own shareholders and employees have lambasted Rekognition’s privacy violations, chilling effects on free speech, discriminatory harms, and threats to due process. In addition to facial recognition’s entrenched bias problems, it enables functionally unavoidable surveillance of people of all backgrounds that makes getting lost in a crowd, such as during a political protest, a thing of the past.

Recent calls for anti-racist reform come on the heels of a wave of anger toward exploitative tech companies, accompanying a crescendo of support for regulating and banning (temporarily or permanently) the use of facial recognition. In response, Amazon has poured millions of dollars into lobbying state legislatures and Congress in support of weak facial recognition and privacy laws and against strong ones. Just as Facebook, Google, AT&T, and their mouthpieces have attempted to burnish their privacy credentials by calling for privacy laws that would ossify an exploitative status quo, Amazon has seen the writing on the wall.

Even before the George Floyd protests, Amazon’s competitors were shifting their own stances on the tech. In February 2019, Microsoft called for facial recognition regulation, and earlier this year, Google CEO Sundar Pichai publicly supported the notion of a temporary ban. Then came this week. IBM announced Monday that it would no longer offer “general purpose” facial recognition or analysis technology. Amazon’s moratorium came Wednesday. On Thursday, Microsoft followed suit with its own announcement that it will not sell facial recognition technology to law enforcement until there is a federal law “grounded in human rights.”

Tech companies’ embrace of regulation may sound promising, and in some ways it is. But I have no confidence that Microsoft and I share the same definition of what it means for a law to be “grounded in human rights,” or that Amazon’s definition of “ethical” regulation is one that would meaningfully curtail its ability to spy on people. Soon after Microsoft announced support for regulation, for instance, it came out against facial recognition legislation in Washington state. The companies want to pass porous federal rules that will allow them to deflect criticism while functionally leaving their business models unchanged, all while preempting the possibility of stronger state protections. Such ineffective laws might limit facial tracking (the ability of systems to follow your face from one place to another) but still allow systems to identify you without limitations. They might require a warrant only for a high-threshold category of searches, such as tracking a person’s whereabouts for 72 hours or more. Or they could provide capacious exceptions for exigent circumstances or “serious crimes.” A law that applied solely to the use of facial recognition in police body cameras would be even less effective. And a law focused solely on policing would neglect the considerable surveillance and fairness concerns stemming from the technology’s deployment by non-law-enforcement agencies and private companies. None of these approaches would stop the worst abuses, but they would allow Amazon to declare the problems fixed.

In the coming year, Amazon may try to improve the bias problems in its facial recognition algorithms through whatever means possible. It might also try to cobble together some sort of self-regulatory code that it would (ostensibly) require law enforcement agencies using Rekognition to adhere to. Either strategy, or both, would give Amazon a plausible defense for returning to law enforcement customers even absent new federal protections. Moreover, the pause does not cover similarly dangerous non-law-enforcement uses of facial recognition technology, and we don’t know whether Amazon will apply it to its Ring partnerships or its work with Immigration and Customs Enforcement, or how it plans to enforce the temporary ban. These omissions are good reminders that civil rights victories cannot be defined on corporate terms. Amazon might also conclude that the reputational laurels of abandoning the law enforcement sector are worth garnering, given the tides turning against it as a tech company, as a provider of racist technology to police, and as a facial recognition vendor, particularly as criticisms of its labor and competition practices persist and mount.

If I sound skeptical of Amazon’s commitment to privacy and civil rights, it’s because I am. But while the key details and ultimate effects of Amazon’s announcement remain to be seen, it’s still a heartening indication of how much public opinion has been pushed by tireless privacy and civil rights advocates. It’s akin to Mitt Romney joining protesters and publicly stating that Black lives matter. Romney isn’t going to vote to defund police departments anytime soon, but his participation shows how radically the consensus has moved and signals softened ground for key reforms. And implicit in Amazon’s announcement that it would pause law enforcement partnerships in the wake of the protests is the admission that it is complicit in the enthusiastic disregard for Black lives by law enforcement that the protesters are trying to eradicate.

The remarkable concessions won by protesters are a forceful reminder that the complacency of racist institutions is not enough to maintain the shameful status quo those institutions assumed was immutable. Racist violence by law enforcement is not acceptable, and it is not inevitable. Nor is inescapable surveillance.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.