Has data breach fatigue inured you to headlines about high-profile cyberattacks? It’s time to wake up. This week, we’ve learned about a new string of high-profile cyberattacks, this time aimed at accessing the personnel records of U.S. government employees. The breach of the Office of Personnel Management, which allegedly originated in China, was apparently uncovered during attempts to step up cybersecurity.
As we commence yet another news cycle of pondering such breaches, it’s worth taking a moment to consider the Internal Revenue Service breach last month. Of all the high-profile data breaches of the past year, it offers perhaps the best chance for us to learn some lessons about data security. That’s not because it was the biggest or most sophisticated or most costly of those breaches—far from it. It’s because unlike most data breaches—in which it is difficult to untangle the layers of corporate secrecy, anonymously sourced rumors, and media hyperbole—the IRS breach was actually officially investigated and discussed in a public venue: a hearing held by the U.S. Senate Committee on Finance on June 2.
Make no mistake: Organizations very rarely speak publicly in any detail about the mechanics of how they were breached and what they did and did not do to protect themselves. The law requires institutions to disclose certain information about breaches—how many records were breached, for instance, or what types of data were stolen. But almost none of those disclosures actually help us understand how breaches happen or, more importantly, how to defend against them.
As data breaches go, we actually know a fair bit about the IRS incident: how many accounts were compromised (about 100,000) and what kinds of information the hackers accessed (transactions, line-by-line tax return information, and income reported to the IRS). But, more importantly, we know how it happened—the thieves were able to download taxpayer information from the IRS “Get Transcript” application (intended for use by taxpayers to access their own records) by submitting personal information about taxpayers, including their Social Security numbers, birth dates, filing status, and addresses. At the Senate hearing, IRS Commissioner John Koskinen stated that these pieces of information were “obtained from sources outside the IRS.” In other words, one of the lessons of the IRS breach seems to be that data breaches can spawn further data breaches. Poorly protected or less sensitive pieces of information may be mere stepping stones for accessing much more sensitive data.
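To make the weakness concrete, here is a minimal sketch of knowledge-based authentication, the general technique the "Get Transcript" application relied on. The record, field names, and data below are entirely hypothetical—this is not the IRS's actual code—but the structural flaw is the same: every field checked is static and may already be circulating from earlier breaches.

```python
# Hypothetical sketch of knowledge-based authentication (KBA).
# All records and field names are invented for illustration.

TAXPAYER_RECORDS = {
    "123-45-6789": {
        "birth_date": "1970-01-15",
        "filing_status": "married_joint",
        "address": "12 Elm St, Springfield",
    },
}

def authenticate(ssn, birth_date, filing_status, address):
    """Grant access if the submitted fields match the record on file.

    The flaw: every field is static, shared with countless third
    parties, and cannot be changed after it leaks.
    """
    record = TAXPAYER_RECORDS.get(ssn)
    if record is None:
        return False
    return (record["birth_date"] == birth_date
            and record["filing_status"] == filing_status
            and record["address"] == address)

# An attacker holding data from an earlier breach passes exactly the
# same check as the legitimate taxpayer:
stolen = ("123-45-6789", "1970-01-15", "married_joint", "12 Elm St, Springfield")
print(authenticate(*stolen))  # True
```

The system has no way to distinguish the rightful taxpayer from anyone else who has assembled the same four facts—which is precisely what Koskinen's testimony suggests happened.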
The real focus of the hearing, however, was not on how the breach happened, but why it happened—or, more specifically, why the IRS had failed to prevent it. Treasury Inspector General for Tax Administration J. Russell George, for instance, noted that the IRS had failed to implement 44 recommendations made in previous security audits. He emphasized that the IRS “has not yet implemented key patch management policies and procedures needed to ensure that all IRS systems are patched timely and operating securely” and that the IRS intrusion detection system “was not monitoring a significant percentage of IRS servers.” He added that the IRS incident response team “was not reporting all computer security incidents to the Department of the Treasury, as required” and that the agency’s “incident response policies, plans, and procedures were either nonexistent, inaccurate, or incomplete.”
This is an astonishingly specific list of a breached organization’s security weaknesses. Thanks to George’s testimony, we know more about the computer security practices of the IRS than we do about those of Sony, Target, Anthem, JPMorgan Chase, or just about any of the other companies that have been breached in recent memory.
And yet, for all that, it’s a list that on some very fundamental level does not make any sense. The security measures that George said the IRS neglected—patching systems, monitoring servers for intrusions, reporting incidents—are all important, but none of them are obviously related to the breach reported in late May. Those thieves don’t appear to have succeeded because the IRS systems weren’t patched or because servers weren’t being monitored. Rather, their success seems to have hinged on the IRS’s decision to let users authenticate their identities with just a few pieces of personal information that, as it turned out, were not too difficult for bad guys to get their hands on.
There are two primary security lessons to be learned from this particular incident: We can do a better job authenticating people, and we need to make it harder for thieves to leverage stolen data, either to access more information or to steal money. The IRS breach brings up all sorts of important and interesting questions about our reliance on static Social Security numbers as identifiers. Furthermore, criminals can—without too much effort—piece together scattered bits of personal information to assemble a fairly compelling facsimile of our identities.
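One standard way to "do a better job authenticating people" is to require a short-lived one-time code delivered through a previously verified channel, so that static personal data alone is not enough. As an illustration—not a description of anything the IRS has deployed—here is a time-based one-time password (TOTP) generator following RFC 6238, using only Python's standard library; the secret shown is the RFC's published test key.

```python
# Sketch of a TOTP second factor (RFC 6238, HMAC-SHA1), built only
# from the standard library. The secret would be shared with the
# taxpayer once, during a separately verified enrollment.
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """Return the time-based one-time password for the given moment."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # RFC 6238 test key, for illustration only
print(totp(secret, at=59))        # "287082" (RFC 6238 Appendix B test vector)
```

Unlike a Social Security number or a birth date, the code above is useless thirty seconds after it is generated—stolen static data cannot reproduce it without the separately enrolled secret.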
So why all the chatter about patching and intrusion detection and incident response policies? Don’t get me wrong, those are important discussions to have. But it’s misleading to suggest that this particular incident is tied to those specific defensive weaknesses; it’s perplexing that anyone would present them as the lessons to be learned from this breach; and it’s frankly foolish to imply that any or all of them would somehow have prevented it.
George, when pressed at the hearing, conceded that he could not give a “definitive answer” as to whether the IRS could have prevented the breach by implementing the 44 audit recommendations, but he argued that “It would have been much more difficult had [the IRS] implemented all of the recommendations.” George is right, in one sense: No set of defenses could have guaranteed protection against a breach—perfect security doesn’t exist. So the best way to think about and assess defenses is in terms of how much harder they make it for attackers to succeed. But he’s wrong to conflate so many different security issues, and wrong to conclude with so little evidence that several seemingly unrelated defenses would have made it any more difficult for the perpetrators.
So as we move on to debating and investigating the recent breach of government employee personnel records, let’s try to keep the conversation a little bit more specific and on-topic. There are lots of different defenses in the world of computer security—not to mention lots of different issues and ideas tangled up in the notion of cybersecurity—so it’s easy to get sidetracked when it comes to evaluating what went wrong in certain breaches, and even easier to resort to the language of sweeping generalities. But individual incidents—especially those like the IRS breach, or the compromise of government personnel records, that may be officially investigated by the government—are rare opportunities to learn specific, tangible lessons about security. Let’s be sure not to waste them.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.