Future Tense

When a Breach Is Not a Breach

Sometimes it’s standard operating procedure in contemporary digital culture.

Illustration: a service member wears a smartwatch that is circled. Photo illustration by Slate. Photos by Thinkstock.

A lot of people were shocked recently when they learned that the Strava Global Heatmap effectively exposed sensitive military intelligence by displaying jogging routes around supposedly secret bases in dangerous locations like Mogadishu. But the biggest surprise is that all of this information had been available—even publicized—since Strava updated and refined the granularity of its heat map in November. This breach didn’t happen after laborious code-cracking by highly trained cybersecurity experts or malevolent hackers. It came from a smart college student poking around online.

We need a new term for a security breach that is not a security breach, because it is actually standard operating procedure in contemporary digital culture. Today, we agree to the exposure of our sensitive, personally identifying information almost every time we download an app or join a social network. The default setting for most technology services is that you share all of your information in the free version, with only minimal privacy adjustments available, and even those are usually difficult to navigate. If you want more control, you have to pay for the premium service, and even then, your data is not your own.

Predictably, Strava’s initial response was to refer users to an old blog post that could help them “check their privacy settings.” In other words, the company blamed the soldiers for exposing classified intelligence instead of taking responsibility for making its own privacy policies more transparent to users.

The Pentagon, too, emphasized the need for military personnel to have “situational awareness,” rather than insisting that data tracking companies develop “privacy awareness.” These responses are unsurprising in their emphasis on individual users’ responsibility for maintaining their own data security. Unlike in the European Union, where citizens have collectively demanded and accepted far stricter data privacy laws that protect individuals over corporate profits, U.S. law leaves the standard operating procedures of data mining companies completely legal.

Experts on wearable technologies and digital profiling have started saying that we need to shift from talking about surveillance to talking about dataveillance. The word surveillance conjures up Cold War images from The Americans: spies with cameras hidden in their lipstick, wiretapped telephones, and secretive state-sponsored monitoring of enemy combatants. While these activities may continue today, most of the “surveillance” that takes place is done by technology companies, not the government. They exchange the information, which might not seem particularly revealing at first glance, for revenue in the form of ad money and payouts by data brokers. These practices happen passively in the background while we use the apps, search engines, and social networking sites we willingly join.

In the case of sensitive military intelligence, the potential harms are easy to imagine. But what is the harm in posting a civilian’s mundane daily activities online? Who cares about my unremarkable jogging path, what time I go to work, or where I prefer to buy groceries? A recent story demonstrated how fitness apps that track exercise routes put women at particular risk for harassment, showing how gendered our ideas about privacy and data display can be in fitness-driven social networks. The heat map also points to the long history of racial profiling that a running map can conceal. College student Nathan Ruser was inspired to dig into the Strava Heatmap data after his dad commented that the map shows “where rich white people are” in the world. Indeed, many commentators have recently shown how seemingly harmless practices of dataveillance reproduce social inequality. Like old imperialist maps that showed darkness where civilization supposedly did not exist, the heat map’s illuminations obscure as much as they reveal.

But calls for people to simply opt out miss an important point. The most successful companies make their dataveillance part of their marketing. Users join Strava, Facebook, Instagram, Twitter, and so on precisely because they are social: they capture users’ data and put it on display for friends and competitors alike. We shouldn’t be expected to simply “opt out” of social life online.

As Strava struggles to deal with the backlash, the core problem is coming into focus: balancing the functionality people want with the privacy they deserve. Users have started complaining that the app’s public “Segments” feature, which lets them compete against other people’s workouts, has been malfunctioning. Strava says this is because it is experimenting with alternative approaches to privacy, stating, “We are reviewing features that were originally designed for athlete motivation and inspiration to ensure they cannot be compromised by people with bad intent.” A huge part of the app’s appeal is the public display of the same type of data that created the security breach; there is no option to share a route without revealing compromising personal information. Social life has become one big security breach.

But we shouldn’t see being social online and being subjected to dataveillance as mutually constitutive conditions. In fact, the same complaints apply to social life offline as well as online, as residents of London can readily attest. That city has so many closed-circuit television cameras that even people who prefer their social life in the flesh are subjected to constant surveillance and tracking. These practices evolved under strict European Union data privacy guidelines that make dataveillance harder online but easy offline.

Online versus offline is no longer the relevant distinction; the question now is dataveillance or no dataveillance. The idea that total, continuous surveillance is acceptable and necessary for national security is a product of large-scale political ideologies that are difficult to change. But the implementation of these practices is allowed or prohibited by policies and laws that companies and governments do have the power to change. Vocal users who object to the status quo also have a part to play.

The EU model shows that alternatives do exist. Companies could be required to be more transparent about what we are agreeing to when we hit “download.” They could offer us a range of options for how much information we want to share, rather than the all-or-nothing model we have today. We could be required to “opt in” to data profiling practices, instead of being told to “opt out” somewhere deep in our privacy settings.

We could also use privacy policies that actually spell out what is happening to users’ information and allow us to choose who sees, saves, and sells what data, instead of burying that information in walls of text that no one really reads. And what about developing new business models that start from the premise that users’ data is off-limits? Maybe that would lead to new kinds of open-source social networks that don’t need to make money off data profiling. Dataveillance has become so pervasive that it now seems almost reasonable to trade privacy for a social life. But social networks existed long before the internet, and they didn’t rely on false tradeoffs to get their participants to opt in.

If Strava’s military users had been required to opt in to sharing their names, locations, and movements with the world when they joined that social network, I’m guessing they would have declined. The opt-in model would challenge the current business strategies of companies that profit from data brokering. But it would also provide a much-needed reset for a data culture whose norms reflect a world we no longer live in.
