Do More Patients Die at Hospitals That Experience Data Breaches?

And if so, why?


Late last year, McLean Hospital in Boston reached a settlement agreement with the Massachusetts attorney general to implement new security programs following a 2015 data breach that exposed the health information of more than 1,500 people. As part of the settlement, McLean agreed to pay $75,000, but also to encrypt personal and health information on portable devices, create an inventory of those devices, maintain a written information security program, and require employees to take mandatory trainings on data security.

The McLean settlement was just one example of recent attempts to tighten information and network security at hospitals and medical centers around the world following a spate of cyberattacks and breaches targeting the health sector. Through measures like the ones McLean agreed to implement, hospitals can make their networks more robust and thereby better protect—and treat—their patients, or so the thinking goes. Now a new study suggests that those kinds of measures may have exactly the opposite effect.

As more and more objects in our daily lives become connected to the internet, from cars to pacemakers, there’s been a lot of focus in recent years on whether cyberattacks of the future will kill people—with hackers deliberately crashing victims’ cars, for instance, or depriving them of essential medical assistance. But an article published in September in the journal Health Services Research suggests that cybersecurity incidents may already be linked to increased rates of fatal heart attacks—not just at the moment when those breaches occur, but for years to come.

The researchers analyzed patient outcomes at 3,025 hospitals between 2012 and 2016 to see whether data breaches were linked to higher death rates. They compared hospitals that suffered known data breaches with those that did not, tracking mortality for years after admission. Data breaches were associated with a 0.23-percentage-point increase in the mortality rate within 30 days of a heart attack, rising to a 0.36-percentage-point increase two years after the breach and a 0.35-percentage-point increase three years after. The researchers also found that breached hospitals took longer to provide electrocardiograms to patients than unbreached hospitals did, and those delays, too, persisted for years after the incidents. In other words, following a data breach or other security incident, for every 10,000 heart attacks at the breached hospital, the researchers saw up to 36 deaths beyond the expected fatality rate for heart attacks.
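To make the scale of those figures concrete, a percentage-point increase in mortality converts to extra deaths by simple proportion: multiply the caseload by the increase divided by 100. A quick back-of-the-envelope check of the study's reported numbers (the function name and loop are illustrative, not from the paper):

```python
def extra_deaths(pp_increase: float, cases: int = 10_000) -> float:
    """Convert a percentage-point increase in mortality rate
    into additional deaths per `cases` patients."""
    return pp_increase * cases / 100

# Percentage-point increases reported in the study, by time after the breach.
for horizon, pp in [("30 days", 0.23), ("2 years", 0.36), ("3 years", 0.35)]:
    print(f"{horizon}: ~{extra_deaths(pp):.0f} extra deaths per 10,000 heart attacks")
```

The 0.36-point increase at two years is where the article's "up to an additional 36 deaths per 10,000 heart attacks" comes from.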

It’s an interesting study for its attempt to use real patient data to draw conclusions about the impact of cyberattacks on patient welfare, and a timely one given how many ransomware attacks have targeted hospitals in recent years. The findings suggest that it’s not just the immediate interruption of a data breach that throws off patient care; mitigating and responding to those attacks also has long-lasting effects. However, the study draws a somewhat dangerous and deeply counterintuitive conclusion: that the fault lies with hospitals trying to implement stronger security measures after a breach, such as mandating new employee security training programs, encrypting all sensitive hospital data, requiring two-factor authentication, and creating an inventory of approved devices that may be connected to their networks.

The authors write that “the remediation activities to improve security in health IT systems following a breach introduce new changes into complex work environments, which may disrupt care processes and explain our findings of reduced quality.” But this notion is overly simplistic and shortsighted. Undoubtedly, IT upgrades and updates can inconvenience workers and slow down operations in any workplace, but that is a reason to develop techniques and processes for implementing them more smoothly—not to write them off as harmful and counterproductive.

Unfortunately, the authors seem to fundamentally misunderstand the role and design of information security controls, arguing that they deliberately add friction and disruption to systems. They write, “Security typically adds inconvenience by design—making it more inconvenient for the adversary. For example, stricter authentication methods, such as passwords with two-factor authentication, are additional steps that slow down workflow in exchange for added security. Lost passwords and account lockouts are nuisances that may disrupt workflow. The persistence in the longer time to ECG suggests a permanent increase in time requirement due to stronger security measures.”

But, bewilderingly, the data used by the researchers does not indicate whether the breached hospitals actually implemented these particular controls or, if so, whether those controls caused the resulting delays in patient care. Two-factor authentication and lost passwords could be contributing to longer wait times for ECGs, but the link drawn in the paper is purely speculative, resting on the assumption that health care facilities that suffer security incidents are likely to adopt such measures. The researchers seem to edge toward recommending that hospitals restrict or limit security upgrades and patches, though they stop just short of saying so outright. Instead, they advise that “breached hospitals should carefully consider remedial security initiatives to limit inadvertent delays and disruptions associated with new processes, procedures, and technologies.”

Of course, security products and services should always be carefully considered, and their potential unintended consequences are important to take into account before making any major changes. But it is alarming to see researchers imply that doing less for security would actually save more lives in the long run. It’s abundantly clear that deficient security practices in health care organizations are a crucial pathway for intruders looking to infiltrate sensitive systems and data. The heart attack fatality figures that the researchers report are striking, but they do not outweigh all the potential harm a hospital could cause by not patching its systems against malware or upgrading its authentication and access control mechanisms. It’s a profoundly irresponsible leap from those numbers to cautioning hospitals against using basic security controls like two-factor authentication—one that will only lead to more online attacks, more disruptions, and quite possibly more deaths.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.