San Francisco, renowned these days as a hub of technology, is about to be at the forefront of curbing its potential abuses: The city is now on track to be the first municipality in the United States to ban use of facial recognition technology by the city government. On Monday, the “Stop Secret Surveillance Ordinance” passed a committee vote. It will head to the San Francisco Board of Supervisors for a final vote on May 14.
Beyond prohibiting face surveillance, the bill requires that all other types of surveillance technology, like automatic license plate readers, predictive policing software, and cell phone surveillance towers, be adopted by city agencies only after public notice and a vote by the Board of Supervisors. It also requires clear policies governing how the city government will use any surveillance technology it adopts.
It may feel like you’re visiting the future every time you access your iPhone X by staring at it, but facial recognition is truly everywhere already: The technology is at work on Facebook, identifying who you’re smiling next to at a party. It’s at the entry and exit points in international airports. It’s in sports stadiums, studying whether you’re having a good time. Facial recognition is even at the mall, deployed to recognize shoplifters. While the federal government has gone all-in on facial identification (the FBI has a database with more than 400 million faces in it), advocates and concerned San Franciscans have been pushing back at the city level, citing worries over how surveillance tools could be used to profile communities already at risk of overpolicing. The result of their activism is the bill that advanced on Monday. A similar proposal to ban the use of facial recognition across the bay in Oakland, where public approval of new surveillance tech is already required, will be debated later this month.
Facial recognition is as alluring to law enforcement as it is flawed in its application, especially when it comes to identifying darker-skinned people. An MIT study released earlier this year found that Amazon’s facial-recognition system, Rekognition, misidentified darker-skinned women as men 31 percent of the time, yet made no such mistakes for lighter-skinned men. The potential injustice is obvious: If police decide to approach someone based on a match from facial recognition software and the software is wrong, the result could be a wrongful stop or a misapplied use of force, and existing biases in policing could be reinforced by biased technology. Compounding the problem, the reference databases that facial recognition systems search to match a new photo to an identity are commonly built from mugshots.
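To see why the makeup of those reference databases matters, consider how a typical matching pipeline works: a system converts a photo into a numerical “embedding” and then searches a gallery of known faces for the closest one. The sketch below is illustrative only, with stand-in random vectors in place of a real embedding model and hypothetical names throughout; the point is structural. The system can only ever “match” someone who is already enrolled in the gallery, so whoever is overrepresented there is overrepresented in the matches.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(photo) -> np.ndarray:
    """Stand-in for a real face-embedding model (hypothetical).
    Real systems use a trained neural network to map a photo to a vector."""
    return rng.standard_normal(128)

# The "gallery" plays the role of a mugshot database: only people
# already enrolled here can ever come back as a match.
gallery = {name: embed(name) for name in ["person_A", "person_B", "person_C"]}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, threshold: float = 0.3):
    """Return the most similar gallery identity, or None if no
    similarity clears the threshold. Set the threshold too low and
    the system produces confident-looking but wrong matches."""
    name, score = max(((n, cosine(probe, v)) for n, v in gallery.items()),
                      key=lambda pair: pair[1])
    return (name, score) if score >= threshold else (None, score)

print(best_match(embed("new street photo")))
```

Under these assumptions, a false positive is just a high similarity score against the wrong gallery entry, which is exactly the failure mode the MIT audit measured.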
According to a letter to the Board of Supervisors from groups supporting the ordinance (including the American Civil Liberties Union of Northern California and the Council on American-Islamic Relations), “because mugshot databases reflect historical over-policing of communities of color, facial recognition ‘matching’ databases are likely disproportionately made up of people of color.” Despite these concerns, Amazon recently tested its technology with law enforcement agencies in Orlando, Florida, and Washington County, Oregon. It’s unclear how widespread facial recognition already is among police, since many departments across the country aren’t required to disclose when they adopt new surveillance technologies. If San Francisco supervisors approve this ordinance next week, though, at least that city’s police won’t be able to use such software.
There was some pushback from attendees of Monday’s meeting. “I think we’re cautiously optimistic that it will pass” the final vote, Tracy Rosenberg, the director of Media Alliance, a Bay Area group that works on technology-justice issues, told me in an interview. A group of about 20 people connected to small businesses, she said, showed up to express concern that “they wouldn’t be able to give video from their private surveillance camera to be able to catch shoplifters.” The board members weren’t persuaded: According to Rosenberg, police have analyzed private camera footage for decades without the help of face-ID databases, for example by spotting a car’s license plate in video from the scene of a crime.
Restricting the use of prejudicial policing and surveillance technologies is monumental; the last thing anyone wants is for racism to be baked into the technologies police use. The San Francisco ordinance could serve as a model for other major American cities to adopt similar policies. It also shows that people can have a say in the technologies police use to surveil their communities. The local activism behind proposals like the one now on track to pass in San Francisco should send a clear signal to lawmakers across the country: People care about their privacy and have rightful reservations about giving police carte blanche to use surveillance technology. The new law won’t take away the sidewalk-facing cameras that are increasingly tough to miss around San Francisco, but it will make their vision a little cloudier.