On Wednesday, Reuters reported that the Federal Reserve detected 51 cyber breaches between 2011 and 2015, according to documents that the news organization obtained through a Freedom of Information Act request. The report prompted panicky headlines: “Fed records show dozens of cybersecurity breaches,” “Fed records show over 50 cybersecurity breaches: report,” “Fed Had Many Cyber Breaches in Recent Years, Reuters Reports,” “Federal Reserve under attack by hacker spies.”
But those headlines may do more harm than good when it comes to providing useful information about online threats to the public. Through no fault of Reuters’ (the documents were apparently heavily redacted), the records obtained from the Fed seem to have contained far too little information to tell us anything of value about these incidents—how they were perpetrated, for instance, or what information the attackers were able to obtain.
I don’t blame anyone in the media for covering the report—after all, here I am writing about it myself—and I don’t even blame the headline writers for emphasizing the “dozens” of breaches the Fed suffered. Nevertheless, it’s important to consider what a strong deterrent these types of headlines are for any organization considering releasing information about their cybersecurity incidents and investigations. Even though the Fed is not a private company, the cyber threats it faces are likely to be relevant to many companies, and its decisions about what security information to reveal to the public can influence how industry makes similar decisions.
Except for breaches that result in the disclosure of individual consumers’ personal information, companies are generally not required to share any public information about the cybersecurity incidents they detect. Last year, the U.S. government passed measures intended to make it easier, or less risky, for companies to share that information voluntarily—but amid all the other concerns about that legislation and how it is written, there is still a lingering question about whether any companies will actually want to share security information if that choice is left entirely up to them.
Suppose you run the information security team at a big company. You employ a number of very smart, technically savvy security professionals; you implement excellent defenses and have extremely sophisticated detection tools that allow you to block a large number of threats as well as identify and investigate the ones that penetrate some piece of your system. By tracking thousands of “near-misses” and investigating incidents, your company has accumulated a significant body of knowledge about what types of threats are out there, how to detect them, and what works (and doesn’t work) to stop them. Other organizations—especially those without your resources and expertise—could probably learn a lot from that knowledge. But would you want to benefit the greater good by voluntarily publishing a report on what you’ve learned about security, if you knew that, at the end of the day, the headlines would just be about how your company faced thousands of threats?
Of course not.
And to be clear, those numbers are meaningless. Those 51 breaches, in which Fed information was disclosed to unauthorized people, apparently include everything from “hacking attacks to Fed emails sent to the wrong recipients,” according to Reuters. We don’t know who was behind the breaches. We don’t know whether they successfully accessed information or stole money. In other words, over the course of the past five years, Fed employees misaddressed emails and/or suffered malicious breaches a total of 51 times. Some other numbers from the release: From 2011 through 2013, malware infected the Fed’s systems eight times, and in 2012 there were four instances of espionage. If anything, we should perhaps be suspicious of how low these numbers are. From 2011 to 2015, I probably sent more than 51 misaddressed emails and had more than eight encounters with malware all by myself.
So what have we learned, really, from these records? Essentially nothing. Or, at least, nothing that is likely to offer anyone any insight into the computer-based threats financial institutions are facing and how best to protect against them.
What we have learned (not for the first time) is that U.S. government agencies are not going to lead the way when it comes to disclosing useful security breach information. And if agencies like the Fed aren’t willing to serve as models—if they won’t reveal anything until forced to by FOIA and even then will reveal only the most minimal information—then what do they expect from industry? The Fed’s release just reinforces the extent to which the U.S. government is unwilling to lead by example even while urging companies to share more about security breaches.
Furthermore, any company that has experienced more than 50 breaches in the past five years (and, by the way, that is probably pretty much every single major company with any half-decent detection capabilities) has to be reading the headlines about the Fed and hardening its resolve to never, ever reveal anything about its own security posture. That’s not to say we should be praising the Fed for sharing very little information about a relatively small number of breaches—it hasn’t done anything applause-worthy in responding to a FOIA request with the bare minimum of information.
But perhaps we shouldn’t be crucifying it, either, over a few empty numbers. If we do that, it could be a very long time before anyone offers up actually useful information about cybersecurity breaches—and even longer before we start figuring out the right lessons to take from the wealth of threat information that has been collected in silence and protected in organizational siloes for the past decade.
Which brings me back to my original question: If information about cybersecurity incidents is so vague and ambiguous that it doesn’t do anyone any good, can releasing it actually be harmful? I’m starting to think it might be.
Do the numbers distract us from trying to get the real details behind these incidents? The Fed chose not to provide the details, and, presumably, its reasoning was that providing any technical detail about these incidents would be more dangerous to its security than leaving the descriptions vague (hence the heavy redactions). That in itself is not terribly comforting, since it implies that the Fed is still concerned about people finding out about the vulnerabilities that were exploited in these earlier incidents. Even less comforting is the possibility that the Fed simply doesn’t know how any of these breaches were perpetrated, or whether they resulted in the disclosure of sensitive information or the theft of money.
Setting aside the misguided and all-too-common belief that there is any real security to be obtained through obscurity, there is still the question of what example the U.S. government wants to set for private industry when it comes to sharing information about security incidents and what lesson private industry will learn from the way that the Fed’s release was received and reported on. On both fronts, I fear, the information the Fed provided has served to move us further away from assembling the data needed to give a clear picture of the threat landscape and how we should be protecting ourselves.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.