Future Tense

How Quickly Should Companies Have to Disclose Data Breaches?

Gas stations in Atlanta began to run out of gasoline after motorists rushed to fill up following Colonial Pipeline’s ransomware attack in May. Megan Varner/Getty Images

When ransomware hit Colonial Pipeline’s networks in May, the whole world knew about it within days. Similarly, when meatpacking company JBS suffered a ransomware attack later that month, the news was made public almost immediately. It’s hard to hide cyberattacks that shut down fuel pipelines or meatpacking plants, but neither company had any legal obligation to report the intrusions as quickly or publicly as they did. In fact, in the United States, many cybersecurity incidents don’t have to be reported at all unless they involve the breach of personal information.

A new cybersecurity bill being prepared by Sens. Mark Warner, Marco Rubio, and Susan Collins aims to change that. It would reportedly require U.S. government agencies, federal contractors, and critical infrastructure companies to report cybersecurity breaches to the government within 24 hours of detecting them. It’s a remarkably tight timeline—slightly longer than the 12-hour deadline recently imposed on critical pipeline operators by the Transportation Security Administration, but notably shorter than the 72-hour reporting window mandated by the European Union’s General Data Protection Regulation. And when the GDPR was passed, even that 72-hour timeline seemed very quick compared with the requirements that preceded it!

If this new bill passes, it could vastly expand the variety of cybersecurity incidents that companies are required to report to the government. It could also completely transform the timeline for that reporting and provide some much-needed clarity across the jumble of existing requirements. Most of those requirements apply only to breaches of personal information—and many of them are incredibly vague about the timeline for that reporting. For instance, the California data breach notification law mandates that companies report breaches of unencrypted personal information “in the most expedient time possible and without unreasonable delay.” In Massachusetts, the requirement is for companies to report a breach “as soon as practicable and without unreasonable delay.” (Other states offer more concrete deadlines, like Colorado’s requirement that breaches be reported within 30 days of being detected, or Texas’s requirement that breaches be reported within 60 days of their occurrence—but these are all significantly longer than the 24-hour window proposed in the forthcoming draft bill.)

There’s some logic to leaving the timeline a little vague, since different types of security incidents may warrant different responses and require different amounts of investigation. A company might want to respond to a long-term cyberespionage campaign differently than it would to a breach of credit card numbers or a ransomware attack. But of those different types of attacks, only the credit card breach would always have to be reported under existing laws. So there’s no good reason to leave the mandatory reporting timelines quite as fuzzy as they are—or to exempt so many cybersecurity incidents from mandatory reporting in the first place.

To understand how companies have avoided having to report so many types of cybersecurity incidents for as long as they have, it’s helpful to understand a little bit of the history of cybersecurity regulation and how the existing breach reporting requirements were developed in the early 2000s. State data breach notification laws were designed primarily as tools for consumer protection—if your information was stolen, state legislators reasoned, you should be informed so you would have an opportunity to take measures to prevent your identity from being stolen, or double-check your credit card bill, or even, in some cases, sue the company that had failed to protect your data. Because that was the primary motivation for these state laws, the exact timeline for when people were notified didn’t matter as much. And since many of those laws require companies to notify every individual affected by a breach, it made sense that the notification process might take a while, as companies figured out whose data had been stolen and how to contact thousands (or even millions) of people.

Unsurprisingly, many companies were very resistant to state breach notification laws when they were first rolled out, intimating that if they were required to report breaches, they might not try very hard to detect them in the first place. Fears that mandatory reporting would discourage breach detection efforts were part of the reason these requirements spread slowly, and it took more than a decade for every state to implement a breach notification law. Even today, there is no federal data breach notification law in the United States. But it’s clearer now than it was 20 years ago that opting out of breach detection isn’t really a viable option for organizations, and that if they fail to detect and remediate their own cybersecurity breaches, then other organizations will.

Another thing that’s changed since the advent of state breach notification laws is our understanding of the variety of cybersecurity threats we face. Because they were designed to protect consumers, our existing notification laws apply only to breaches of personal information and not to other types of cybersecurity incidents, such as ransomware, denial-of-service attacks, and economic espionage. Since those types of incidents are unlikely to directly affect individuals’ risk of identity theft or payment card fraud, they weren’t seen as a priority for public notification. (One exception is ransomware attacks in which the perpetrators threaten to release stolen information if they are not paid—these sometimes blur the line between data breaches and extortion and might, in some cases, be subject to existing breach notification requirements.)

In fact, many incidents that don’t involve the theft of personal information probably don’t need to be disclosed publicly like breaches, so long as they are reported to and aggregated by a government office that can track general trends and disseminate that information. But because most of our breach reporting requirements are rooted in ideas about consumer protection, there is no comprehensive regime or system for tracking those statistics about anything other than breaches of personal data.

That leaves us largely in the dark about many other types of online threats—how often they occur, who is affected, how much damage they cause, and what types of security controls do and don’t work to protect against them. For instance, the FBI’s 2020 Internet Crime Report says the bureau received complaints about a total of 2,474 ransomware incidents last year. Presumably, many—indeed, most—ransomware victims simply did not bother to report their experience to law enforcement, and that’s why the number is so low. But we don’t really know how much of an underestimate that figure is. (One commonly cited figure, from the firm Statista, puts the number of ransomware attacks worldwide in 2020 at 304 million, but it’s hard to trace the origins of that number, too.)

So requiring businesses to report a broader range of cybersecurity incidents—including threats like ransomware—is important, if only because it would give us a better handle on how serious the different cybersecurity threats we face are, how many organizations each one affects, and where we should be investing our resources. The 24-hour timeline in the new draft bill is ambitious, but it’s not unrealistic or unfair, especially given that the disclosure doesn’t need to be made publicly—only to the government. And it could be a significant corrective to a decades-long gap in our data collection about cybersecurity incidents, one that has allowed us to learn a lot about data breaches and very little about anything else.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
