A year ago, Google announced it would stop automatically scanning and analyzing the text of your Gmail messages to target you with ads. The move was widely praised as a victory for online privacy.
It may have come as a surprise, then, to see fresh headlines this week about Google allowing people to read Gmail users’ emails. On Monday, the Wall Street Journal ran an investigative story headlined, “Tech’s Dirty Secret: The App Developers Sifting Through Your Gmail.” It detailed how various third-party companies have gained Gmail users’ permission to scan their inboxes and, in some cases, have even allowed human employees to read people’s messages.
The explanation is not that Google has been backsliding on its privacy practices. It’s that the public and the media are starting to set the bar higher in the wake of Facebook’s Cambridge Analytica scandal—reassessing, along the way, our relationships with some of the world’s biggest internet companies. And that’s a very good thing.
For more than a decade, Google used software to “read” people’s Gmail messages and show them ads related to the subject matter of their personal communications. While Google insisted that its human employees weren’t literally reading people’s mail, the practice was nonetheless regarded by many as invasive and creepy. Google’s critics (and rivals) held it up as an example of how the company’s business model required violating the privacy of its users.
That changed in June 2017. The company’s paid Gmail service, part of its G Suite cloud software product for businesses, was booming—without the invasive email-scanning or targeted ads. To further fuel that growth, Google moved to shore up its privacy reputation by making the free, consumer version of Gmail less invasive, too. Google said its computers would stop scanning people’s messages for ad-targeting purposes. But they kept scanning them for other purposes, such as filtering spam and malware, personalizing search results, and suggesting “smart replies” to emails. In May, on the heels of Facebook’s Cambridge Analytica fiasco, NBC News reported in some depth on all the ways Google was still harvesting users’ personal data, including their Gmail messages.
This week, the Journal highlighted another, arguably more troubling form of Gmail data collection that the NBC News report didn’t mention. It’s the email monitoring that Google allows outside developers to perform on users’ inboxes, provided they get those users’ permission. From the Journal’s story:
The internet giant continues to let hundreds of outside software developers scan the inboxes of millions of Gmail users who signed up for email-based services offering shopping price comparisons, automated travel-itinerary planners or other tools. Google does little to police those developers, who train their computers—and, in some cases, employees—to read their users’ emails, a Wall Street Journal examination has found.
Google requires those app developers to tell users in advance what type of data they’ll be collecting, and users have to agree to that before they can start using the apps. To Google’s credit, that permissions pop-up is written in plain, concise English, unlike the long, legalistic privacy policies that accompany most online services. Still, the free internet has trained many of us to simply agree to whatever stipulations are necessary to install or launch an app once we’ve decided that we want it. And as the Journal highlighted, the permissions that many users granted to Gmail app developers turned out to be more wide-ranging than users might have reasonably expected. In some cases, that included granting further access to relatively obscure third parties, which the user might never have heard of.
For instance, an email management and analytics company called Return Path appears to have gained access to the inboxes of some 2 million Gmail users. It did this not by asking them directly but through a network of 163 “partner” apps. Those apps ask users for email-monitoring permission in exchange for some free service—then, in turn, they grant Return Path access to users’ inboxes as well.
One example is an app called Earny that promises to save you money by scanning your inbox for evidence of purchases on items that have since dropped in price, so that you can get a refund via your credit card company’s “price protection” policy. The Journal reports that Earny not only scans your email itself but also partners with Return Path, allowing that company to collect and process your emails as well. Return Path then uses that data to tell its corporate clients, such as Overstock.com, about your email-reading behavior, so that they can hone and better target their marketing emails. The Journal also reports that Return Path allowed some of its human employees to read people’s emails in order to better train its filtering algorithms.
Earny says its privacy policies make it clear that third parties such as Return Path will be able to monitor your email according to their own privacy policies. But it’s hard to imagine that Gmail users who signed up for Earny really understood exactly what would happen to their data and how many companies would be able to subsequently use it for their own purposes. Even a Google representative I spoke with acknowledged that she didn’t fully grasp how Earny’s relationship with Return Path worked or how that accorded with Google’s own privacy policies.
There’s no evidence so far that Earny, Return Path, or anyone else abused their access to users’ data in damaging ways, as a shady third-party Facebook app developer did by handing Facebook users’ personal information to the political targeting firm Cambridge Analytica without their consent. That’s good news for Google, because without evidence of obvious harm, it’s unlikely that Google CEO Sundar Pichai or Alphabet CEO Larry Page will find themselves hauled before Congress, as Facebook’s Mark Zuckerberg was.
Still, there are parallels between the two sets of revelations. Both Facebook and Google, in a push to turn their products into “platforms” upon which third parties could build apps, were willing to give up their control of users’ sensitive personal data in the bargain. It appears so far that Google was more careful about this than Facebook was—but not careful enough, by today’s heightened standards.
It’s worth noting that Gmail isn’t the only email service that allows app developers and data miners to scan users’ emails. Both Microsoft and Oath, the Verizon subsidiary that includes the remnants of AOL and Yahoo, appear to grant various forms of email access to third parties. Oath in particular has touted its ability to mine users’ messages for shopper-marketing data.
Google made it clear that it’s taking the Journal story seriously, publishing a post on its Keyword blog to explain and defend its privacy practices. To guard against abuse of Gmail users’ data by third-party apps, Google said it enforces “a multi-step review process that includes automated and manual review of the developer, assessment of the app’s privacy policy and homepage to ensure it is a legitimate app, and in-app testing to ensure the app works as it says it does.” (The Journal story, however, quoted at least one app developer saying it had never seen evidence of such human review.)
Google’s blog post also highlighted a feature called Security Checkup that periodically encourages users to review their privacy settings, including the access they’ve granted to Gmail app developers. You can use Google’s Security Checkup right now to see what apps you’ve authorized to scan your inbox.
When I did so, I found that I’d given permission to Apple’s OS X operating system, Apple’s iOS Mail app, and Apple’s Calendar app for Mac, all of which I have no problem with. But I also saw that I had given my Gmail keys to a third-party calendar app called CalenMob, which I haven’t used in years. I quickly revoked that access.
Here’s another parallel between the Gmail-reading story and Cambridge Analytica: In both cases, the story revolved around a long-standing industry practice that most users, and the media, once tacitly accepted. Yes, we were granting access to our sensitive data left and right, but that was a price many of us were willing to pay for the “free” internet. We implicitly trusted not only the likes of Facebook and Google, but tiny, obscure startups and individual app developers with access to our social media profiles and personal email inboxes.
In retrospect, it’s hard to imagine what we were all thinking. But surely part of it was that big Silicon Valley tech companies were broadly viewed by the public and mainstream media as benign forces in society, improving our lives at little to no cost through the magic of software. And those big tech companies encouraged us at every turn not to think too hard about the trade-offs; to sign terms of service agreements we didn’t have time to skim, let alone comprehend; to sacrifice our antiquated notions of personal privacy on the altar of big data.
That’s why Facebook’s attempts to pin the Cambridge Analytica scandal on a single rogue app developer rang so hollow. It wasn’t a man named Aleksandr Kogan that Facebook users were trusting when they signed up for a seemingly harmless quiz app called “This Is Your Digital Life”—it was Facebook.
Google at least seems to understand that Google itself, and not just the likes of Earny or Return Path, bears responsibility for ensuring that Gmail users’ data isn’t abused. A few years ago, Google’s policies—requiring apps to get users’ explicit permission in straightforward language, reviewing developers’ credentials, and making sure their permission requests were consistent with their stated purpose—would have been regarded as sufficient, if not supererogatory.
But after Cambridge Analytica, the implementation of Europe’s strict new privacy regulation, and mounting disillusionment with Silicon Valley and the free internet it gave us, Google’s earnest if modest efforts are no longer enough. Society is no longer looking the other way when big tech companies let their users’ sensitive information get passed around to shady third parties. We saw this in the skeptical responses from Congress and the U.K. Parliament to Facebook’s excuses for the Cambridge Analytica debacle. We saw it last month when the major U.S. wireless carriers agreed to stop sharing their users’ real-time location data with certain third-party data aggregators. And we’re seeing it now in the scrutiny of Google’s Gmail privacy practices.
So far, Google has not announced any changes or crackdowns in response to the Journal story, a spokesperson told me. But it should: The arrangement by which Return Path obtains and packages Gmail users’ information for corporate customers is not something that users of the world’s largest email service ought to accept. Nor should we accept similar arrangements from Microsoft or Oath.
There are real trade-offs when we hold web platforms to higher privacy standards. Behemoths such as Facebook and Google consolidate their control of users’ data, while the freewheeling app ecosystems that have helped launch so many startups wither. The backlash against human employees occasionally reading people’s emails, in particular, seems potentially misguided. According to the Journal, Return Path let some of its employees read a subset of Gmail users’ messages for a limited time, with the express purpose of training its algorithms to better filter personal emails out of its collection systems. If we’re going to rely on machine-learning algorithms at all, there has to be room for humans to train those algorithms to handle our data properly.
All that said, the pendulum is swinging in the right direction. With the U.S. government so far demonstrating neither the will nor the capacity to protect people’s online privacy—even as representatives of both parties agree it’s a problem—it’s up to the media, the public, and nonprofits to exert pressure on tech companies to change their ways. And that’s just what they’re doing—finally.