Google Did the Right Thing

Despite negative coverage, the company handled the Google Plus vulnerability properly.

Google CEO Sundar Pichai in Sun Valley, Idaho, on July 12.
Drew Angerer/Getty Images

Every day, software developers find and fix hundreds—probably thousands—of bugs and vulnerabilities in code. Many of these are not a big deal, but some are serious, like the vulnerability in Google’s social networking platform Google Plus that made users’ data available to outside developers, as an internal review at the company revealed earlier this year. We found out about it Monday, when it was reported in the Wall Street Journal. That same day, Google said it would shut down Google Plus for consumers in light of the discovery and announced several other changes to its privacy policies and user settings intended to strengthen users’ control over their own data and how it is used. It’s unclear whether Google’s decision to go public was prompted by the WSJ investigation or vice versa. Either way, the privacy changes are very much for the best, and Google Plus is unlikely to be deeply missed. (I make this prediction based solely on my experience teaching college students about the Federal Trade Commission’s 2011 investigation of Google’s previous social networking platform, Google Buzz, which none of my students have heard of. I then describe it as “the precursor to Google Plus” … which none of my students have heard of, either.)

Google’s announced changes are, by and large, clear improvements in protecting its customers’ data, and there is no evidence that the Google Plus vulnerability led to any breaches of user data. But the incident has garnered mostly negative publicity for Google. The piece in the Wall Street Journal ran under the headline “Google Exposed User Data, Feared Repercussions of Disclosing to Public.” (In the print edition, the same article ran under the headline “Google Hid Data Breach for Months.”) An article in Engadget took a similar tack with the headline “Google Exposed Data for Hundreds of Thousands of Users.”

The strong implication of this language (especially the Journal’s use of the word “breach”) is that user data was, in fact, stolen from Google—something that no one has uncovered any evidence of or even alleged. If Google had found evidence of someone accessing users’ personal information, it would have been required to inform its users, thanks to the data breach notification laws in almost every state in the U.S., as well as European privacy regulations. Instead, it found a vulnerability in its software—something that happens every day—patched it immediately, and then spent several months weighing how best to respond and whether to disclose it voluntarily.

Admittedly, a software vulnerability that granted anyone using Google APIs access to Google Plus users’ data is not your everyday, run-of-the-mill software vulnerability. That’s probably why the company ultimately did decide to make several other policy changes. But, judging by publicly available information, its response and decision-making process were in no way as nefarious or malicious as much of the media coverage has made it sound.

It is unfair, and frankly unwise, to accuse a company of a cover-up for declining to make a big public announcement about a software vulnerability that, so far as anyone can tell, was never exploited to steal data. Unfair because we have never expected tech companies to announce every vulnerability—even the major ones—to the public. The state notification laws that ensure we are told about data breaches apply only to incidents in which people’s personal information has actually been stolen, not those in which people’s information might possibly have been stolen due to a technical bug that there is no evidence anyone outside the company even knew about, much less took advantage of. It’s true that these are relatively narrow laws that leave companies with a lot of latitude not to report a whole range of other security incidents that don’t involve the theft of customer information, and maybe they should be broadened. (For instance, denial-of-service attacks, ransomware, and corporate espionage all typically fall into the category of security breaches that companies would not have to disclose to the public.)

But even if you believe that companies should be required to disclose more of their security incidents to allow for better data gathering and so consumers can make more informed decisions about companies’ security postures, it’s still tricky to imagine a reasonable policy that would require the disclosure of a vulnerability like the one Google found. Where and how do you draw the line between the really serious vulnerabilities that have to be reported and the trivial ones that can be handled internally? And if you do succeed in differentiating between, say, the vulnerabilities that make it possible for outsiders to conduct large-scale data breaches of customer data and those that don’t, how do you ensure that requiring disclosure of the former group doesn’t dissuade companies from aggressively hunting for those vulnerabilities in the first place?

The Wall Street Journal takes Google to task for an internal company memo in which the company’s legal and policy staff speculated that disclosing the vulnerability would cause regulators to take a closer look at the company, invite comparisons to Facebook’s Cambridge Analytica controversy, and prompt Congress to summon CEO Sundar Pichai for testimony. But as the Journal’s own coverage of the incident makes absolutely clear, Google was completely right to be wary of the negative publicity the incident might draw. After all, there was the Journal billing it as a hidden “data breach” at the top of the front page even though there is no evidence any user data was breached by outsiders.

Again, we don’t know whether Google would have gone public without the Wall Street Journal investigation. But when a company finds a vulnerability and puts in place precautionary measures intended to prevent similar problems in the future, only to be met with the kind of coverage usually reserved for companies that have stood by and let hackers steal millions of people’s personal information, it is hard to see what incentive anyone has to do anything more than the absolute bare minimum required by law.

It’s not necessary to applaud Google for finding and fixing a security vulnerability—that’s what it’s supposed to do. But it’s a mistake to cast it as the villain for behaving exactly the way every tech company does every day. That type of coverage doesn’t help consumers, because it conflates real security breaches, where consumers may need to take steps to protect their identities, with the routine developer work of identifying and patching vulnerabilities. It doesn’t encourage more transparency and disclosure among tech companies, because it reinforces the lesson that the only thing companies can expect if they reveal security problems is a wave of negative publicity. And it’s a reminder that we still don’t have good language—or good policies—for distinguishing among different types of computer security risks and helping people understand what threat, if any, those risks pose to them.

Disclosure: Slate is a partner with New America and Arizona State University in Future Tense, and the author has been a fellow at New America. Google has donated money to New America, a think tank, as has former Google executive chairman and CEO Eric Schmidt, who also once served as chairman of New America’s board. In August 2017, a former New America employee who was critical of Google alleged that he was fired because Schmidt held undue influence at the organization, a charge New America denied.