What if you walked into a bar and everybody knew your name—except you’d never been there before?
A couple of weeks ago, we were introduced to Facedeals, which integrates Facebook’s APIs with facial recognition technology. When you enter a store, restaurant, or bar that uses Facedeals, your mug will be scanned so that you can be offered special deals and get automatically checked in to the location. “Creepy,” tech sites RedOrbit and TechCrunch both labeled it. That’s not surprising.
Creepy is the go-to term for broadcasting how technology unsettles us. Time and time again we’re asked to think in binary terms and identify a device or app either as good or as its polar opposite, creepy. Although we’re often led to believe that creepy is an emotional response to things going horribly awry, our creepy radar isn’t nearly as reliable as Peter Parker’s danger-detecting spider-sense.
Creepy is everywhere. (I must admit that I’ve abused the word, too. Sorry!)
We give the creepy stamp of disapproval to: digital holograms of deceased stars like Tupac and Elvis; our so-called addiction to technology; seeing too much information displayed on social networking sites; predictive technologies that infer where we’re going and what we’ll do when we get there; and behaviorally targeted advertising that displays heightened awareness of personal detail. The list goes on and on.
Want to really creep someone out? Just discuss tracking technologies that reveal who can be found where. Eyebrows were raised over Alohar (which can use speed readings to determine whether you’re driving or walking down a street) and SceneTap’s facial detection cameras (which can identify how many men and women are at a bar), and so much uproar occurred over Find Friends Nearby that Facebook pulled the app.
Given the pervasive allergy to creepy tech, even among millennials, engineers are trying to penetrate the personal locator market by developing self-proclaimed “noncreepy” apps like Roundpop. Articles now have titles like, “Ambient social networking apps may need to overcome ‘creepy’ label to go mainstream.”
Beyond being pervasive techno-talk, what does the experience of feeling creeped out really reveal? Does it point to anything important? Or should we see it as a term used to end dialogue, much in the way that religion can be a conversation stopper?
Creepiness can be a powerful feeling, an overwhelming sense of uneasiness. I can’t help but associate the term with negative religious overtones. After all, who leads Adam and Eve astray in Genesis? A snake—a slithering reptile that creeps around on its belly!
Still, creepiness isn’t necessarily a sign that something is amiss. As the history of technology shows, sometimes feelings are out of sync with reasonable responses. Louis CK strikes comedic gold with this point in his polemic against airplane passengers whose grandiose sense of entitlement led them to feel profound disappointment when a glitch knocked out their state-of-the-art, high-speed Internet connection.
On the flipside, time and time again our feelings convey culturally laden fears about innovation. Consider early experiences of traveling by train.
In Technology: Art, Fairground, and Theatre, Dutch philosopher Petran Kockelkoren notes that when people first started traveling by rail, new passengers experienced symptoms of “train sickness” that today we would deem bizarre. In the 1860s, for example, complaints of spinal damage caused by sitting on a train reached “epic proportions” in England, where some sufferers demanded compensation. The United States and Germany, too, hosted cases of train sickness. But decades later the epidemic died down and “disappeared from medical discourse, almost without a trace.” At the height of train sickness mania, people didn’t just complain of physical maladies (including reports of eye infections, diminished vision, miscarriages, urinary tract blockages, and hemorrhages). Psychiatric claims proliferated, too, including allegations that the train’s rapid movements produced “mental disturbances.”
The train example is but one of many relevant cases where negative feelings about technology convey bias. The history of conservative reactions to medical progress is especially informative. For example, after Christiaan Barnard performed the first human-to-human heart transplant at Groote Schuur Hospital in Cape Town, South Africa, on Dec. 3, 1967, critics expressed their uneasiness by condemning the surgery as a problematic case of doctors “playing God.” That anxiety has passed, and today the practice is widely accepted, while the enhancement debates—which might reflect very different sensibilities in 45 years—have shifted to extreme transhumanism, which may give us “animated and programmable LED tattoos connected to your brain” and a nanobot fueled “Enhancement Olympics.”
But what about cases where feeling creeped out isn’t a reactive or entitled response, but actually signals something truly is off-base? Then we confront two sticking points. The first is that sometimes we’re not sure which issues should be discussed, and this makes it hard to determine why an argument motivated by creepy feelings should or shouldn’t be considered persuasive. When the Girls Around Me app—which displayed photos, names, and the physical location of users—was released, critics immediately condemned it for being creepy, connecting their concerns to privacy matters. It took a sociologist like Nathan Jurgenson to point out that an important issue was being overlooked: sexism.
The second sticking point is that creepy can concern possibilities rather than actualities. Calling something creepy can be a way of saying, “There’s no immediate problem, but I can foresee ways in which things might go wrong in the future.” Given the complexity and uncertainty involved in predicting the future in an ever-changing technological landscape, good arguments can be much harder to come by than simply deploying creepy as a rhetorically resonant way of placing a bet on forthcoming disaster.
In instances where data collection creeps someone out, that person might think, “Given the history of compromised security, the ease by which information can move from one platform to another, and the vested interests in using personal information for control and profit, there’s good reason to keep a close eye on things.” This issue came up when critics depicted Target’s customized advertising as creepy after a store’s mailer filled with maternity items essentially identified a teenager as pregnant before her parents knew. Although some concern was directed at whether Target uses information in a transparent way, I suspect greater anxiety concerned a sense of unknown—an inability to determine what steps retailers “take to protect your identity or to minimize the accidental release of information.”
Likewise, the creepy stigma attached to the facial detection technology (which can register a viewer’s gender and age) being developed for the next generation of televisions largely arises out of concern that it will fuel a new breed of tailored advertising. What we don’t know is exactly how the ads will be configured, how accurate the information scanning will be, and what kind of safeguards will protect consumers against having their information misused. It thus is easier to be worried than prudent.
Although appeals to creepy can focus us on concerns that deserve attention, we should be sensitive to the dangers of status quo bias, and wary of sensationalized shorthand that short-circuits difficult analysis. Objectors to new technologies need better reasoning than, “I don’t know why—it’s just creepy, all right!?!”
This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.