Future Tense

“Creepiness” Is the Wrong Way to Think About Privacy

From Why Privacy Matters by Neil Richards. Copyright © 2021 by Neil Richards and published by Oxford University Press. All rights reserved.

Imagine, for a moment, your reaction to a new practice or technology that collects or uses human information. Maybe it’s the city of Baltimore, which has installed surveillance microphones on city buses and authorized spy planes to fly over the city taking high-resolution video of everything on the ground, on a dubious crime-prevention rationale. Maybe it’s your voice-activated smart television, listening to your conversations and equipped with “automatic content recognition” to monitor and collect everything you watch. Or maybe it’s a networked sex toy that records your usage of it. What’s your natural reaction to these practices? If you’re like most people, you might think “Wow. That’s pretty creepy!”

Most discussions of privacy and new technologies run into accusations of creepiness at some point. Consider, for example, surveillance-based advertising, Facebook’s experiments to control the emotions of its users, NSA surveillance, black-box data recorders in cars, eavesdropping “smart” Barbie dolls, the Internet of Things, drones, Google scanning your Gmail or accessing vast amounts of health care data, the use of predictive analytics by employers, Zoom’s attention-tracking feature, and police use of DNA databases and audio captured from smart speakers to investigate crimes. Each of these practices has been labeled “creepy” at one time or another.

Given how common creepiness has become as a way of thinking about privacy, it should be no surprise that creepiness has entered privacy law. Consider, for example, the Fourth Amendment’s famous “reasonable expectation of privacy” test. This idea has influenced privacy law around the world, and it has also entered popular understandings of what privacy is and why privacy matters. Under the test, developed from Justice John Marshall Harlan’s concurrence in the case of Katz v. United States (1967), the Fourth Amendment applies when a person manifests (1) a subjective expectation of privacy that (2) society is prepared to recognize as objectively reasonable. But by resting the constitutional doctrine on what people expect, and triggering the inquiry when those expectations are violated, the “reasonable expectation of privacy” test is at bottom a theory of creepiness.

Despite the dominance of creepiness as a way of thinking about privacy, it’s a trap, and we must resist it. Like the very best traps, creepiness is a seductive one, luring us in so gracefully that we don’t realize we’ve been ensnared. Creepiness distracts us from the real issues at stake in privacy, and it has three principal defects.

First, creepiness is overinclusive. As a test for threats to privacy, creepiness proves too much. Lots of new technologies that might at first appear viscerally creepy will turn out to be unproblematic. Some will even turn out to be beneficial. Philosopher Evan Selinger reminds us that early steam train passengers were not merely creeped out by the new iron horses; they were physically terrified. Early passengers fainted and complained of serious maladies like spinal damage, urinary tract blockage, and eye infections, all from traveling at speeds that by today’s standards wouldn’t even count as speeding in a school zone. New technologies—even useful ones—frequently inspire visceral terror and unease. Facebook’s News Feed feature creeped out many users when it was first introduced because they were not accustomed to having all their information aggregated in one place for easy consumption. Now the feature is considered fundamental both to the company’s success and to the social awareness of many users of its network.

Second, creepiness is underinclusive. New information practices that we don’t understand fully, or highly invasive practices of which we are unaware, may never seem creepy, but they can still menace values we care about. Take, for example, the mass tracking of phone calls by the NSA prior to the Snowden revelations, or the use of secret algorithms to score our lives. Such practices may unconstitutionally subject us to criminal or civil punishment (from jail time to designation on “no-fly” or “watch” lists), or they may overcharge us or deny us access to insurance or to economic opportunities (in the case of scoring by credit brokers or university admissions algorithms). These practices may be illegal, inaccurate, or both, but if they operate behind layers of secrecy, we may never learn about them. And things we are unaware of are unlikely to trigger the creepiness reaction. The fact that creepiness is underinclusive is a real problem, because many of the most threatening and harmful information practices are invisible to ordinary people, such as the use of opaque black-box algorithms to sort (and then act on) citizens and consumers based on their race, gender, politics, vulnerability, or gullibility. This category includes many of the most pernicious uses of data-based technologies: digital redlining, fake news, and voter manipulation. If creepiness is our main test for privacy issues, its underinclusiveness represents a massive hole in its ability to do good work for us. Hidden exercises of information power can be just as dangerous as overt ones, if not more so.

Third, creepiness is both socially contingent and highly malleable. Creepiness rests on psychological reactions to social practices—reactions that are socially constructed, that can change over time, and that can be manipulated. A pervasive threat to privacy or our civil liberties can be made less creepy as we become conditioned to it. Think, here, about metal detectors at airports or at the entrances of professional sports events and public schools. Such a threat may remain equally serious but become normalized as we fit it into our understanding of the world in which we have to operate, alongside other problems like police corruption, sexism, or drunk drivers. The internet advertising industry, which relies on detailed surveillance of individual web-surfing to target ads, has fallen into this category. It’s easy to forget that the internet of the late 1990s was an anarchic, largely untrackable domain of the weird and the private. (Undeniably, it also included a lot of pictures of naked people.) By contrast, the corporate, commercial, mobile app–driven internet of the early 2020s represents probably the most highly surveilled environment in the history of humanity. (Equally undeniably, it too includes a lot of higher-resolution photos and streaming videos of naked people.)

When it comes to privacy—particularly when it comes to data-based surveillance and manipulation of consumers—creepiness is a trap. What’s really important is not whether the creepy reaction is being triggered. Instead, what really matters is power—the substance of what’s going on whether we as consumers understand it or not, whether we as consumers notice it or not, or whether we as consumers are freaked out by it or not. Thinking of privacy in terms of creepiness is not only a bad way of gauging whether there’s a real privacy issue; it also confuses us about what’s really at stake, and it further enables the exercise of power by those who control our data. Given these limitations, when we talk about privacy, we must do much better than talking in terms of creepiness.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
