For all the cybersecurity headlines and big stories of the past few years, we still don’t know very much about computer security incidents: who is carrying out attacks, why, and how. So policy efforts around cybersecurity—including the Protecting Cyber Networks Act, which passed in the House last week—have focused primarily on trying to make more information about security incidents available to people who can use it. The PCNA has stirred up considerable controversy: critics fear it would provide the government with new avenues to access Internet users’ personal information, suggesting a potential dark side to the otherwise innocuous-sounding goal of “information sharing.”
The bill specifically authorizes sharing of “cyber threat indicators” and “defensive measures.” Cyber threat indicators, according to the bill’s lengthy definition, include malicious reconnaissance, security vulnerabilities, and “any other attribute of a cybersecurity threat.” What might this actually look like in practice? A company targeted by denial-of-service attacks might share information about the kind of traffic it’s being bombarded with, or a system administrator who finds malware being used to exfiltrate sensitive data might share the code to help others look for it, or even the details of how it was delivered (what the phishing email said, who sent it, or what the attachment was named, for instance). A company experiencing espionage might report on the lateral movement it sees on its own network as spies copy and package information for their own use—for instance, there may be particular patterns to how files are being copied, compressed, or sent to outside destinations (every morning at 4 a.m., 50 files are copied from server A to server B, compressed, and sent to a server in Latvia).
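To make the idea of an “indicator” concrete, here is a minimal sketch of what the receiving end of this kind of sharing might look like: checking local logs against indicator values a peer organization has passed along. The `Indicator` class, field names, and example values are all hypothetical illustrations, not a real sharing format or feed.

```python
# Hypothetical sketch: matching local log records against shared
# "cyber threat indicators." Field names and values are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Indicator:
    kind: str   # e.g., "ip", "attachment_name" (hypothetical categories)
    value: str  # the shared indicator value itself

def match_indicators(log_records, indicators):
    """Return the log records whose fields contain any shared indicator value."""
    wanted = {(i.kind, i.value) for i in indicators}
    hits = []
    for record in log_records:
        for kind, value in record.items():
            if (kind, value) in wanted:
                hits.append(record)
                break  # one match is enough to flag the record
    return hits

# Indicators a peer organization might share after an incident:
shared = [
    Indicator("ip", "203.0.113.7"),               # server receiving exfiltrated data
    Indicator("attachment_name", "invoice.scr"),  # phishing lure seen elsewhere
]

# A few local log records to check:
logs = [
    {"ip": "198.51.100.4", "attachment_name": "report.pdf"},
    {"ip": "203.0.113.7", "attachment_name": "notes.txt"},
]

flagged = match_indicators(logs, shared)
```

The point of the sketch is how narrow this kind of matching is: it can flag exactly the values that were shared, and nothing else, which is why indicators aid real-time defense but say little about root causes.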
Lots of this sharing already happens, both informally and through sector-based Information Sharing and Analysis Centers. But making this information easier to share more widely, especially by removing the threat of legal consequences, is supposed to help others—companies and government agencies alike—figure out what to look for on their own systems and how to deal with it. Except there’s a problem, beyond the ambiguity surrounding what kinds of data can be shared and with whom and whether the privacy protections in place are adequate. It’s also hard to pin down exactly what purpose information sharing is supposed to serve.
Like its Senate counterpart, the Cybersecurity Information Sharing Act, PCNA seems to be primarily designed to help government and private actors detect and defend against threats in real time, to help people protect against the attacks and intrusions they see on their networks today based on specific attack signatures. That’s a worthy goal—but it’s not the only goal we should be thinking about in making cybersecurity policies.
An equally important, if not more important, function of sharing information about computer security incidents ought to be learning from them over longer periods of time to help inform design and engineering decisions. Instead of sharing snippets of malware code, attacking IP addresses, or other signature information, this would mean sharing information about how attacks were enabled by technological design choices—the root causes of those attacks—and the possible ways they might have been interrupted by defensive interventions.
It’s the difference between being able to say “look out for this traffic pattern or this malware or this email—it’s trouble!” and being able to say “over the past few years, we’ve seen attackers repeatedly taking advantage of this specific design flaw and we need to do something about it.” And the PCNA, like many cybersecurity policy initiatives before it, is much more interested in the short-term, real-time interventions than longer-term security learning.
Information sharing policies are important because they play a large role in dictating how much we know about the state of computer security. For instance, we hear a lot about the big data breaches in part because many states require breaches of personal information to be disclosed. Those laws, which are themselves information sharing regimes of a sort, were driven by a different purpose, namely consumer protection. That goal dictated the information they required organizations to disclose (usually the type of information breached and number of records stolen) as well as who they had to report it to (typically the people whose information was breached).
As a result of these state laws, we see headline after headline about major data breaches, but we know much less about computer security incidents that don’t involve breaches of user information. We routinely learn how many records were stolen and where they were stolen from, but it’s rare that we gain any insight into how the breaches happened or what might have prevented them. This makes sense in the context of policies intended to protect consumers from fraud and identity theft. But it leaves those of us following those incidents with data points that are not particularly helpful for trying to figure out how to improve security measures.
The PCNA and the Senate Cybersecurity Information Sharing Act are not interested in sharing more information about a broader range of incidents with the public, nor are they interested in getting at the root causes of attacks and the engineering decisions that enable them. They’re designed to allow a totally different kind of information—attack signatures, which can aid short-term defense efforts but are unlikely to be helpful in informing longer-term design decisions—to flow more freely among a limited set of private and government actors. That’s a perfectly fine goal, but in trying to tailor an information sharing policy for short-term defense, I worry we lose sight of the longer-term need for learning from these incidents about how to design our technologies more securely.
That’s probably not a goal that can be met by the PCNA or CISA or any policy intended to promote information sharing that’s useful for immediate threat response. If anything, the primary criticism leveled at them—that the policies allow the sharing of too much data with too many people for too wide a variety of purposes—suggests we need to pare down, rather than expand, the types of information these policies cover, as well as their audiences and uses. So by all means, let’s try to tailor cybersecurity information sharing policies to serve particular functions and be vigilant about their use for outside purposes—but let’s also keep in mind that there’s more than one thing we can learn from those incidents. We don’t need a cybersecurity information sharing policy, in other words. We need several.