The tiny, shelf-level security camera at the grocery store is watching you. It’s recording your height, gender, race, and approximate age, as well as the brand of orange juice you just placed in your shopping cart. Oh, and it also knows exactly who you are.
This is the kind of eerie scenario—however hypothetical—that flickers to mind when many people think about facial recognition technology, a close cousin of the biometric scanning famously featured in Tom Cruise’s Minority Report as an ultraintrusive tool for customized advertising. Though we’re nowhere near the world described by that film, the rapid spread of facial recognition software—programs that “recognize” a person by converting visual data about his or her face into a unique mathematical representation—into daily life has brought out some of our deepest anxieties about privacy and control. Unease was rampant, for example, when British grocery chain Tesco installed cameras to scan customers’ faces in 2013. The FBI’s Next Generation Identification system, launched in 2014, stoked fears of a totalitarian dystopia. Just recently, Microsoft’s new Windows operating system produced a few squirms for featuring a login program that can supposedly tell identical twins apart.
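For readers curious what that “unique mathematical representation” actually means in practice: such systems typically map each face to a vector of numbers (an embedding) and identify someone by finding the closest enrolled vector. Here is a minimal, purely illustrative sketch in Python—the four-dimensional vectors, names, and threshold are all made up (real systems use embeddings of 128 or more dimensions produced by a neural network):

```python
import math

# Hypothetical enrolled faces, each reduced to a short vector.
# These numbers are invented for illustration only.
enrolled = {
    "alice": [0.12, 0.80, -0.45, 0.33],
    "bob":   [-0.70, 0.15, 0.52, -0.08],
}

# Embedding of a newly captured face (also invented).
probe = [0.10, 0.78, -0.47, 0.35]

def euclidean(a, b):
    """Distance between two embeddings; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Illustrative cutoff; real systems tune this to balance
# false matches against missed matches.
THRESHOLD = 0.5

# "Recognition" is a nearest-neighbor search over enrolled faces.
name, dist = min(
    ((n, euclidean(probe, e)) for n, e in enrolled.items()),
    key=lambda pair: pair[1],
)
match = name if dist < THRESHOLD else None
print(match, round(dist, 3))
```

The key point the sketch makes is that the camera never stores a picture per se to identify you; a list of numbers derived from your face is enough.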
While people have been uncomfortable in the past about other forms of biometric identification, such as fingerprint scanners, something about facial recognition prompts an extraordinary level of panic. In headlines and conversations alike, the word creepy is thrown around without a second thought. But we give our names and identities away online all the time—so what’s so deeply troubling about revealing our faces?
One very rational source of wariness about the technology is, of course, facial recognition’s complicated relationship with privacy laws. Because federal laws don’t yet exist in the United States to govern this relatively new development, people in the majority of states can be recorded by the technology without giving their informed consent. Most other countries haven’t set rules around this, either. A colossal privacy issue is at hand—especially because cameras, unlike fingerprint pads, can collect high-stakes identifying data without ever revealing their presence in the room. That’s also alarming since the most advanced programs can now recognize a person’s mug from any angle. While one could always resort to drastic measures to hide from facial recognition, the idea of donning a mask to go out in public obviously doesn’t have mass appeal. Governments need to adequately address these privacy gaps, and we’re right to remain suspicious of the technology as long as it remains legally unchecked.
But let’s say companies and privacy advocates come to an agreement on policies that would keep personal data secure and give people control of how their faces are tracked. Even then, resistance would remain, and it boils down to one thing: Facial recognition simply creeps people out.
Science fiction and media hype are partly to blame for our apprehension. But it’s also a fascinating matter of psychology—one that has to do with the basic human instinct to run from prying eyes. Brian Mennecke, an associate professor at Iowa State University who conducts consumer research, explains that facial recognition technology tends to give people the unpleasant feeling that they’re being singled out and cornered. It’s not the idea of being seen that’s unnerving, but rather the fear of then being remembered and followed, he says.
These fears, though, are largely irrational. The proof: We’re intensely alarmed by some forms of facial recognition and not others. For instance, Mennecke says that most of us are much more comfortable with unknown Facebook employees in California looking at our faces than with security guards keeping tabs on them from control rooms in malls. That’s because computers depersonalize and distance experiences, and the lack of physical proximity to those on the other side of the camera makes us far more trusting. Mennecke adds that, online, there’s also a “we’re in a group, and it’s not about me individually” mentality. In the flesh, people become much more conscious of their presence as individuals—and as potential targets.
If, however, we’re able to sort out the privacy concerns—Illinois and Texas already have laws on the books, and national legislation may be forthcoming—then the only thing preventing the technology’s widespread implementation is our squeamishness. We fear the worst-case scenario: a future like Yevgeny Zamyatin’s We or Dave Eggers’ The Circle, in which full visibility comes to dominate our private lives. But facial recognition, under proper legal supervision, is no more exposing than the online activity we volunteer every day. And as “creepy” as the advanced technology may be, we shouldn’t deny the vast benefits that its popular use could bring.
First, there’s the obvious: Law enforcement officers all over the world would be able to easily pick out individuals from crime scene footage. Other benefits are subtler. Small businesses can use facial recognition to keep track of demographics and boost their sales accordingly. Banks, by placing software-enabled cameras in their offices, can pin down criminals who try to open multiple accounts under different names. Access-based security systems can improve dramatically, since a face is much harder to hijack than a password. Retail stores can advertise special promotions for recurring customers.
Here’s how that would work: Upon entry into a store, a camera would recognize your face, and a screen would conveniently offer you suggestions and discounts tailored to your regular purchases. And since retailers recognize the apprehension people have about the technology, they’re likely to offer this to customers as an opt-in program that comes with additional loyalty-type perks, Mennecke says.
Churchix, a handy program currently used in 40 churches to track which congregants attend the most events and are most likely to participate in fundraisers, has been deemed uncomfortably invasive. But religious organizations around the country are struggling to get donations; the software could help them thrive again. Dozens of other community uses for the software exist, says Churchix CEO Moshe Greenshpan, adding that several schools have expressed interest in buying it to more efficiently take attendance in classrooms. (Yet the software is being discussed as spooky and intrusive, rather than as a valuable, time-saving tool.)
There are other, more personalized applications: The Animal Humane Society in Minnesota’s Twin Cities region recently introduced an app that uses facial recognition to help owners find their lost pets. A dating site can match couples based on their physical compatibility. Pawan Sinha, a professor of computational neuroscience at MIT, notes that the technology can even improve the lives of people with prosopagnosia, a cognitive disorder that makes it difficult to visually interpret faces.
All of this is not to say that facial recognition software is without problems or dangers. Like any burgeoning technology, facial recognition must reckon with complex privacy laws, and legislators have the daunting task of making sure its applications remain beneficial instead of exploitative.
But from a purely technological standpoint, a facial recognition dystopia is still a long way off. Though face-recognizing technology aims to mimic the visual processing of the brain, the “reality of the difference between what the brain can do and what computer vision systems can do is quite starkly different,” Sinha says. He adds that the technology is still hampered by problems of image quality, distance, and visual degradation, which our brains easily transcend. Because our understanding of the brain is incomplete, progress in the facial recognition field is much slower than most people think.
The stigma around facial recognition makes it difficult for this progress to be accepted—but it’s clear, from its widespread benefits, that it should be. Asked why he thinks facial recognition has such a “creepy” reputation, Greenshpan finds it hard to pin down in words. “It’s not really about technology,” he muses. “It’s mainly about that feeling. I don’t see any misuse of data or anything like that, but people get this feeling.” He pauses again and sighs: “It’s just that feeling.”
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.