Future Tense

No Safe Haven for Victims of Digital Abuse

Security isn’t just a technical problem. It’s a social one.


In December, Edward Snowden unveiled a new app called Haven, which turns your Android phone into a monitoring device to detect and record activity. Snowden has pitched Haven as a safeguard against so-called evil maid attacks, in which an adversary snoops through your digital devices or installs trackers on them when you’re not around. In interviews, Snowden was clear that one group he thought might use Haven was victims of intimate partner violence, who could use it to record abusers tampering with their devices.

You can’t imagine my excitement when I heard that the world’s best-known anti-surveillance advocate was thinking about digital tools to fight intimate partner violence. Over the past year, I’ve been working with a team of researchers at Cornell and New York University to understand the role of digital technology in this context. We’ve been doing this primarily through interviews with survivors and professionals who work in this field (case managers, social workers, attorneys, and others). What we’ve discovered in our research is that digital abuse of intimate partners is both more mundane and more complicated than we might think. It’s mundane in that many forms of digital abuse require little to no sophistication and are carried out using everyday devices and services: social media platforms, find-my-friends apps, cellphone family plans. Abusers aren’t hackers: Though some do install surreptitious “spouseware” to monitor their victims without consent, it’s much more common to abuse victims digitally in ways that don’t require any high-tech skill. But at the same time, digital intimate partner abuse is incredibly hard to fight, because the relationship between abuser and victim is socially complex. Abusers have different kinds of access to and knowledge about their victims than the privacy threats we often think about. In this way, intimate partner violence upends the way we typically think about how to protect digital privacy and security.

When you learn that your privacy has been compromised, the common advice is to prevent additional access—delete your insecure account, open a new one, change your password. This advice is such standard protocol for personal security that it’s almost a no-brainer. But in abusive romantic relationships, disconnection can be extremely fraught. For one, it can put the victim at risk of physical harm: If abusers expect digital access and that access is suddenly closed off, it can lead them to become more violent or intrusive in other ways. It may seem cathartic to delete abusive material, like alarming text messages—but if you don’t preserve that kind of evidence, it can make prosecution more difficult. And closing some kinds of accounts, like social networks, to hide from a determined abuser can cut off social support that survivors desperately need. In some cases, maintaining a digital connection to the abuser may even be legally required (for instance, if the abuser and survivor share joint custody of children).

Threats from intimate partners also change the nature of what it means to be authenticated online. In most contexts, access credentials—like passwords and security questions—are intended to insulate your accounts against access from an adversary. But those mechanisms are often completely ineffective for security in intimate contexts: The abuser can compel disclosure of your password through threats of violence and has access to your devices because you’re in the same physical space. In many cases, the abuser might even own your phone—or might have access to your communications data because you share a family plan. Mechanisms like security questions are unlikely to protect you, because the abuser knows or can guess at intimate details about your life—where you were born, what your first job was, the name of your pet.

And some types of intimate partner abuse don’t require account access at all. A number of respondents in our study described how abusers harass and threaten them using digital devices; spread sensitive images or information about them to family and friends; and impersonate them online (for instance, placing ads online claiming that the victim is a prostitute and providing her address). All of these digital attacks can be carried out without any access to a victim’s devices or accounts, but can be tremendously harmful and disruptive to victims’ lives.

Some abusers did install spyware on victims’ devices. But just as commonly, they used “legitimate” monitoring apps—like Find My Friends, parental control apps, and theft trackers—to keep track of victims’ location and communications. These tools (which we’ve been calling dual-use apps) can be readily found on app stores and repurposed for intimate partner abuse—and victims may not know that they are configured to share so much information with abusers.

All of these forms of abuse show that digital abuse in relationships doesn’t require a lot of tech savvy. Most of these attacks can be carried out easily, without any special skills or access. What’s difficult about protecting the privacy and security of victims isn’t technical complexity; it’s that the relationship between the abuser and the victim involves social, physical, financial, and emotional ties, and that digital abuse is deeply intertwined with all of these. Because of this, many of the ways we often think about privacy and security fall short in this context.

This is why we need to think extremely carefully when designing technologies that are meant to aid survivors of intimate partner violence. Rather than designing elaborate tools to counter sophisticated attackers, we should focus more energy on the ways these abuses really happen. The people who make devices and platforms should explicitly consider intimate threat scenarios in their design processes—something best achieved by partnering with experts in the field. (We’ve already seen this happen in some cases. Facebook, for example, worked with the National Network to End Domestic Violence to design a safety guide for abuse survivors.) Social media platforms are already making some efforts to quash some forms of intimate digital abuse (most notably, revenge porn)—but big challenges remain. Tech companies have been notoriously slow to implement effective anti-harassment policies and abuse reporting tools—and harassment from intimate partners is likely to be especially difficult to mitigate, since it doesn’t depend on identifiable mechanisms like bots, and abusive content might not seem immediately abusive to an out-of-context observer.

This is also why well-meaning tools like Haven might not ultimately be the most effective way to fight digital abuse. Haven won’t deter abusers from using social media to stalk or harass victims; it won’t foreclose an abuser’s access to family plans or digital accounts. And turning the tables on the abuser by secretly recording him might even lead to escalation of the abuse in other ways. This kind of abuse is far more socially complicated than some unknown adversary phishing for your credit card information. There are many ways an intimate abuser can use technology to control a victim, and much less clarity about what to do when such abuse is discovered.

The complications of digital abuse within relationships should lead us to rethink our focus in working toward privacy and safety for vulnerable groups. Protecting security isn’t just a technical problem. It’s a social one. If our dominant mental model of a security threat is a shadowy hacker or an evil maid from a Bond movie—and the tools we build reflect that thinking—we run the risk of ignoring the very real security problems that thousands of people face every day, in their own homes.

Karen Levy is an assistant professor of information science at Cornell University. She researches the social and ethical dimensions of data collection.