Can Users Opt Out of Facial Recognition Technology in the Physical World?

Facial-recognition technology is operated at Argus Solutions in Sydney, Australia.

Photo by Ian Waldie/Getty Images

Facebook was widely criticized this summer after launching a feature that automatically identified and tagged users as they uploaded photos online. Google avoided a similar backlash after introducing a comparable feature earlier this month by letting users decide for themselves whether they wanted to participate. As Evgeny Morozov describes in his “Future Tense” column today, this “opt-in” approach is savvy on the social networks’ part, but the comfort consumers take from opt-in policies is false:

While it’s certainly less coercive, any opt-in still makes the underlying technology—automated facial recognition, in this case—seem normal and acceptable. But no technology companies will acknowledge this.

And facial recognition isn’t just for social networks. It is moving into the physical world, where opting in or out is much more complicated, if not entirely impractical.

Real-time facial recognition isn’t quite ready for prime time yet. The technology in commercial use today mostly detects faces rather than identifying them: a camera can tell when a face comes into view, but it can’t attach a name or any other traceable data to that face. It can, however, estimate general characteristics of the person in view, including gender, age group, and, in some cases, emotion.

Intel’s AIM Suite software is used in digital signs to tailor advertisements to passers-by—when someone identified as a woman in her 20s walks by, say, she’ll be confronted with a mascara ad, while a man in his 50s may be served a spot for a wristwatch. The program has other uses, too: Bars and nightclubs use SceneTap, a social networking service powered by AIM Suite, to track the number of people inside, the male-to-female ratio, and the average age by gender. Patrons can then consult SceneTap’s smartphone app to decide which bar to visit on a night out.

Neither AIM Suite nor SceneTap stores images or attempts to attach an identity to a face, and both companies say they’re not interested in tracking individuals. The personal privacy concerns appear minimal, but these systems nevertheless raise critical questions that will only become more pressing as the technology improves enough to identify and track faces. At what point do people know they are being watched? Where can they find the privacy policy that explains what happens when they’re on camera? How can they opt out if they’re not comfortable with the technology?

At this early stage, the answers to those questions are not encouraging. At a Dec. 8 Federal Trade Commission workshop on commercial use of facial recognition technology, SceneTap’s chief strategy officer said bars that use the service place a decal in the front window by the entrance. The decal shows the app’s logo and URL, ostensibly so patrons know the service is in use and can go online to read the privacy policy. Anyone uncomfortable with the service, he said, can opt out—by leaving.

Designing an opt-out for the physical world is difficult. A camera in a physical location is going to capture people as they pass by no matter what. Panelists at the FTC’s workshop tossed around some ideas to deal with this, but all suggestions were problematic and superficial.

Some talked about placing a conspicuous notice near the cameras with a QR code linking to a privacy policy, but QR codes are regularly ignored (some argue they’re on the brink of death). Another suggestion was to create an opt-out database like the National Do Not Call Registry, but this would presumably require everyone opting out to submit their photos to a central database that gets accessed every time their face appears on camera. If this registry were government-run, it would probably deter people just as much as the facial recognition technology itself.

Companies using facial recognition say they’re acutely aware of personal privacy concerns. They generally recognize that a system that identifies and tracks people requires notice and consent, but at this point their policies amount to self-regulation. Ultimately, facial recognition technology has the potential to create new and useful services, but the perception of privacy invasion is a threat. If services aren’t perfectly clear about what they’re doing, as Facebook learned, customers and policymakers will turn against them.