Every time we install a new app or sign up for a new web service, we make a decision about what kind of data we’re willing to provide in return. Companies have an important decision to make, too: how clearly to tell you about the ways they’ll use your data to make money.
On April 23, a story in the New York Times about the founder of Uber broke another story in passing. According to the Times, Uber purchased data about its competitor, Lyft, that was obtained via the web service Unroll.me. Unroll.me solves a pretty common problem: being subscribed to a bunch of email lists that you’ve completely forgotten about. When you use Unroll.me, it crawls through your email looking for those pesky subscriptions, gives you a list of them, and lets you unsubscribe from each at the click of a button. It’s a useful service. It’s also free.
As the saying goes, if you aren’t paying for a product, you are the product. This trade-off most often takes the form of monetization through advertisement. Companies like Facebook, Google, and Amazon want to know as much about you as possible so that they can serve you increasingly targeted advertisements. This is their business model, and most people have a basic understanding now of how companies with free products make money.
However, Unroll.me sold user data for a different purpose. The client was Uber, which (particularly in light of a string of bad press) had reason to want to keep tabs on its major competitor. According to the New York Times, Unroll.me found Lyft receipts in their users’ email and then sold the anonymized information to Uber as a way to evaluate Lyft’s business health.
The CEO’s apology amounts to “we’re sorry you’re upset,” noting that “while we try our best to be open about our business model, recent customer feedback tells me we weren’t explicit enough.” He points out that users agree that they have read and understood the company’s policies before they use the site, but he concedes that most people probably don’t have the time to review them thoroughly.
He’s right. Research has shown that it would take the average internet user roughly 200 to 300 hours a year to read the privacy policies of every website they visit, and that doesn’t even account for how often sites update them. That reading time represents a national opportunity cost of hundreds of billions of dollars.
I don’t believe Unroll.me did something intentionally shady. I think it followed the crowd. But I also think it misjudged where the line should be. Though companies like Facebook often require users to grant “transferable” licenses to their content or data, it makes far more sense for those companies to keep that data for themselves: to aggregate it, categorize it, and sell ads that target certain kinds of people. Most people now have a decent model of how companies use their data to make money through advertising, so having that data sold directly to third parties comes as more of a surprise. But as the situation with Unroll.me demonstrates, it happens, too.
People claim to care about data privacy. The outcry in reaction to this situation reinforces that. But in general, we mostly don’t seem to care enough to read the terms of service, to try to find out how our data is being used, or to refuse to sign up for useful apps that in the fine print require blanket permissions. We’re happier just checking the box and hoping that companies will avoid exploiting the permissions we’ve given in any way that would upset us or embarrass them.
The solution is surely not to start spending 300 hours every year reading privacy policies. But it may well be to accept the idea that selling data is part of how the free app economy works—and then to push companies to disclose their data practices in a clear way, outside of the fine print. We need to dial down the hysteria about data use (Facebook manipulates your news feeds! Instagram can serve ads with your content!), which only encourages companies to try even harder not to get “caught.”
Instead, we need to ask them to explain what they are doing and why, and how it is essential for their business model. If we want the services we use to be thoughtful about how they use our data, we should be thoughtful about how they use it, too.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.