Future Tense

Trump’s Plan to Stop Violence Via Smartphone Tracking Isn’t Just a Massive Privacy Violation

It’s also highly unlikely to work.

In the wake of the Dayton and El Paso shootings, President Donald Trump has been quick to blame mental illness as the culprit. There is no evidence that either of these shooters has a mental illness, nor that mental illness is to blame for their violent outbursts. Research also suggests people with mental illness are more likely to be victims of violence than perpetrators. But none of this has stopped the president, nor has it stopped the Suzanne Wright Foundation, which has used Trump’s assumption as the basis for a new proposal to control gun violence. The foundation’s plan? Start a new government agency called HARPA, the Health Advanced Research Projects Agency, modeled on the military technology agency DARPA (same acronym, but the D stands for Defense), and implement what it calls the SAFEHOME proposal.

The Suzanne Wright Foundation did not immediately respond to our request for a copy of the proposal, but according to reporting from the Washington Post, SAFEHOME—the acronym for Stopping Aberrant Fatal Events by Helping Overcome Mental Extremes—would develop “breakthrough technologies with high specificity and sensitivity for early diagnosis of neuropsychiatric violence” using multiple sources, including “real-time data analytics” and technologies like phones, Apple Watches, Fitbits, and smart speakers like the Amazon Echo and Google Home. In other words: SAFEHOME would use your digital devices to keep tabs on you and determine whether you might become violent. The Suzanne Wright Foundation thinks this will work because “advanced analytical tools based on artificial intelligence and machine learning are rapidly improving,” and, by its measure, improving enough that those tools ought to be let loose on your data.

If only it were so easy. Yes, artificial intelligence and machine learning are rapidly improving. But it’s still a dangerous game to apply such tools to predict something as complex and context-dependent as violent behavior. Consider, again, that the link between mental illness and violence is tenuous; there’s no evidence that people with mental illness are more likely to commit a violent crime. And the link between patterns in a person’s device data and their mental state is even less understood.

Mental health researchers have generally cringed at SAFEHOME’s premise because even in their carefully designed research studies, they’ve discovered it’s incredibly difficult to draw any conclusions from a person’s digital data. “Context is critical,” says Emily Mower Provost, an associate professor of computer science and engineering at the University of Michigan. Along with Melvin McInnis, a University of Michigan professor studying bipolar disorder and depression, Provost has analyzed speech in phone calls made by participants with bipolar disorder. Some of the phone calls in their sample were the participants’ check-ins with clinicians, while others were their personal calls.

Provost and McInnis were able to predict participants’ moods and symptoms from recordings of their calls with clinicians, but that predictive ability was “greatly reduced” when analyzing their personal calls. “We realized this is because a person’s communication patterns change based on context,” says Provost. Consider your last few calls: The way you speak to a clinician asking how you’re doing emotionally probably feels (and sounds) different from the way you speak while calling your mom, your high school best friend, or a colleague. Some research suggests that people talk in a higher pitch around people they perceive as higher status, like a job interviewer—not necessarily relevant to predicting violence, but indicative of how much social circumstance can affect how we present ourselves.

It’s unclear whether SAFEHOME has a specific plan for which metrics it would target in its research. Besides voice, such a program might also examine people’s habits, like whom they’re texting, what they’re saying, and how often; what music or podcasts they’re listening to; what they’re searching for; or what they’re buying. There’s more of that “real-time” data available from the wearable devices the proposal lists, like Fitbits or Apple Watches: These devices measure heart rate, general movement, sleep time, and location. These variables might tell us something about how people are feeling, but the data alone are hard to interpret. For example, researchers have mixed findings about how text messages correlate with depression; while some studies have linked increased texting or phone use to depression, others have found the opposite. Divorced from context, it’s impossible to know what this type of data means. Is someone’s heart rate elevated because they’re angry, or because they’re on an elliptical? Have they been sleeping less because they’re agitated, or because they have an infant at home? Without a detailed look at a person’s life, it’s difficult to interpret someone’s mental state from these limited metrics, let alone predict whether that mental state might lead to violent behavior later on.

Even if a machine learning algorithm can, in theory, cobble together some predictions about your mental state and later behavior, it’s going to be wrong a lot of the time, at least in the beginning. We all want predictive tools to help people get the help they need, says John Torous, the director of the division of digital psychiatry at Harvard’s Beth Israel Deaconess Medical Center, but given what we know about predicting suicide, we’re a long way from predicting other complex behaviors like gun violence. In one recent study, trained clinicians were able to correctly predict only 1 percent of cases in which a patient was suicidal. “Incidents related to violence are even rarer, which means they will be even harder to predict,” says Torous, who previously wrote in Slate about the issues with machine learning diagnoses. “This suggests there is much more to learn and there will not be a quick solution or panacea. The association between mental health and violence is also weak, which suggests the signal sought will be even more elusive.”
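
To see why rarity is such a problem, consider a back-of-the-envelope calculation. The sketch below uses entirely made-up numbers, not figures from the SAFEHOME proposal or any study, and it grants the hypothetical screening tool far better accuracy than anything that exists today. Even then, nearly everyone it flags would be a false alarm.

```python
# A minimal, hypothetical sketch of the base-rate problem with predicting rare events.
# All numbers are illustrative assumptions: even a screening tool with 99 percent
# sensitivity and 99 percent specificity flags mostly innocent people when the
# behavior it looks for is extremely rare.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a flagged person is a true positive (Bayes' rule)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Assume 1 in 100,000 people will commit an act of mass violence in a given year,
# a made-up rate chosen only to show the effect of rarity.
prevalence = 1 / 100_000
ppv = positive_predictive_value(prevalence, sensitivity=0.99, specificity=0.99)

print(f"Share of flagged people who are actual threats: {ppv:.4%}")
# Prints roughly 0.1 percent: about 999 of every 1,000 people the system flags
# would be wrongly identified, even with an unrealistically accurate model.
```

That lopsided ratio is a straightforward consequence of Bayes’ rule: when the thing you’re screening for is vanishingly rare, even a tiny false positive rate swamps the true positives.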

If HARPA somehow does find a way to predict violent behavior from device data, there are bigger troubles ahead. Implementing SAFEHOME would require serious surveillance; whose data would be used to develop the program’s algorithms, and whose data would be monitored for violent behavior? In health studies, ethics boards require researchers to anonymize participants’ data. That’s harder for digital data, says Camille Nebeker, a research ethicist at the University of California–San Diego School of Medicine. “There are researchers who have been looking at existing data sets, and even when they think they’re anonymized, they’re able to re-identify the people,” Nebeker says. The range of data points you’d likely need to make a predictive model work—location, phone call logs, searches—would be the digital breadcrumbs that make us uniquely identifiable.
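
Nebeker’s point is easy to illustrate in miniature. The toy example below is hypothetical: the records, column names, and values are invented, and it stands in for the far larger and richer datasets a program like SAFEHOME would collect. It simply counts how many “anonymized” records are pinned down by a unique combination of a few coarse attributes, which is the basic mechanism re-identification research exploits.

```python
from collections import Counter

# A toy "anonymized" dataset: no names, just a few coarse behavioral attributes.
# All records are invented for illustration.
records = [
    {"home_zip": "48104", "top_call_hour": 22, "daily_steps_bucket": "high"},
    {"home_zip": "48104", "top_call_hour": 22, "daily_steps_bucket": "high"},
    {"home_zip": "48104", "top_call_hour": 7,  "daily_steps_bucket": "low"},
    {"home_zip": "92093", "top_call_hour": 13, "daily_steps_bucket": "medium"},
    {"home_zip": "92093", "top_call_hour": 2,  "daily_steps_bucket": "low"},
]

# Group people by their combination of quasi-identifiers.
profiles = Counter(
    (r["home_zip"], r["top_call_hour"], r["daily_steps_bucket"]) for r in records
)

# Any combination that appears exactly once pins down a single individual:
# whoever can link that profile to outside data (a gym app, a phone bill)
# has effectively re-identified the "anonymous" record.
unique = [profile for profile, count in profiles.items() if count == 1]
print(f"{len(unique)} of {len(records)} records are uniquely identifiable")
```

A real dataset with location traces, call logs, and search history offers far more than a handful of attributes to combine, which is why “anonymized” behavioral data so often turns out not to be.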

There’s also the question of what action we could take if an algorithm predicts a person might turn violent. There will likely be more false positives than true positives, says McInnis, and it’s unclear what to do with those the system flags: “Could we envision people being kept from flying, attending malls, and theaters because of some incorrectly identified risk? How could one be moved from the ‘identified risk’ category?” Nebeker told me this all reminded her of the movie Minority Report, which takes place in an alternate future where people are arrested for “thought crimes.” It’s hard to imagine how the agency could take action on such predictions without infringing on citizens’ rights.

The best-case scenario for a program like SAFEHOME would be to focus on using metrics to connect individuals with the care they need, rather than trying to find links between their behavior and a hypothetical outcome, like violence. “Technology and other medical monitoring systems should be used to empower the infirm and their care rather than incorrectly isolating them as dangerous elements of society,” says McInnis.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
