War Stories

Something in the Water

The cyberattack on a water treatment plant in Florida is a long-overdue wake-up call.

Water runs from a faucet of a water fountain outside. Kenzo Tribouillard/AFP via Getty Images

The only surprise about last week’s hacking of a water treatment plant in Florida is that this sort of thing doesn’t happen more often. The intrusion was all too easy. The hacker entered the plant’s control system through a commonly used tool called TeamViewer, which lets engineers monitor the network’s machines—and adjust their settings—remotely. The hacker boosted the level of lye—an ingredient in drain cleaners—from 100 parts per million, its normal level, to 11,100 parts per million, which would have poisoned anyone drinking the water.
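
It is worth pausing on how little the attack required: the hacker simply typed a new number into a remote control panel. Here, as a thought experiment, is a minimal Python sketch of the kind of bounds check a control system could enforce on any remotely submitted setpoint. The function name and the limits are hypothetical, not drawn from the Oldsmar plant's actual software.

```python
# Hypothetical sketch of a server-side bounds check on a remotely
# submitted dosing setpoint. The limits and the 100 ppm baseline are
# illustrative; they are not taken from the Oldsmar plant.

LYE_PPM_MIN = 0.0
LYE_PPM_MAX = 150.0  # assumed engineering ceiling, a bit above normal

def apply_lye_setpoint(requested_ppm: float, current_ppm: float) -> float:
    """Accept a new setpoint only if it falls within engineered limits."""
    if not (LYE_PPM_MIN <= requested_ppm <= LYE_PPM_MAX):
        # Refuse the change and keep the current, known-good value.
        print(f"REJECTED: {requested_ppm} ppm is outside "
              f"[{LYE_PPM_MIN}, {LYE_PPM_MAX}] ppm")
        return current_ppm
    return requested_ppm

# A request like the Oldsmar hacker's would be refused outright:
# apply_lye_setpoint(11_100.0, 100.0)  -> stays at 100.0
```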

Hundreds of people in the small town of Oldsmar, near Tampa Bay, avoided illness, possibly death, only because a plant operator noticed the manipulation on the system’s monitors and manually restored the settings to normal. If the operator had been reading or snoozing, letting the system run on autopilot, as sometimes happens at computer-controlled utilities, disaster would have struck.

Industrial systems have been run by automatic controls since the 1970s, but as long as they were physically isolated, security wasn't a problem. When the internet came along in the 1990s, the managers of these systems adapted, and eagerly so, since applying automatic controls across broad networks (vast electrical grids, waterworks, pipelines, rail lines, and so forth) would make operations much more efficient.

Around the same time, senior government officials began learning that these public networks were vulnerable, but most of the firms running the networks were privately owned—and they fervently resisted regulations. Nor were they inclined to spend much money on the problem voluntarily. Doing so would be expensive, and the threat at the time seemed theoretical. (For more on this history, see my book, Dark Territory: The Secret History of Cyber War.)

Things became a bit less theoretical in 2007, when the Department of Energy conducted an experiment called the Aurora Generator Test. By this time, officials had been probing the vulnerability of automatic controls for a decade, but Aurora was the first tangible test of whether a physical object could be destroyed in a remote cyberattack. A 2.25-megawatt power generator, weighing 27 tons, was installed inside a test chamber at the Idaho National Laboratory. On a signal from Washington, where officials were watching on a monitor, a technician typed a mere 21 lines of malicious code into a digital relay. The code opened a circuit breaker in the generator’s protection system, then closed it just before the system responded, throwing its operations out of sync. Almost instantly, the generator shook, and some parts flew off. A few seconds later, it shook again, then belched out a puff of white smoke and a huge cloud of black smoke. The machine was dead. You can watch a video of the test on YouTube.
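
The mechanics of the damage are simple to state: reconnecting a generator to the grid while the two are out of phase slams violent torque into the machine. Protection relays are supposed to block such a reconnection with a synchronism check, which the Aurora code beat by opening and reclosing the breaker faster than the protection could respond. The following deliberately simplified Python sketch illustrates the check itself; the thresholds are assumptions for illustration, not values from the test.

```python
# Simplified sketch of the synchronism ("sync-check") idea that the
# Aurora code defeated by racing the protection system: a breaker should
# reclose only when the generator and the grid are nearly in step.

MAX_ANGLE_DIFF_DEG = 10.0  # assumed allowable phase-angle difference
MAX_FREQ_DIFF_HZ = 0.1     # assumed allowable frequency difference

def safe_to_reclose(gen_angle_deg: float, grid_angle_deg: float,
                    gen_freq_hz: float, grid_freq_hz: float) -> bool:
    """Permit a breaker close only when generator and grid are in sync."""
    # Wrap the angle difference into [-180, 180) before comparing.
    angle_diff = abs((gen_angle_deg - grid_angle_deg + 180.0) % 360.0 - 180.0)
    freq_diff = abs(gen_freq_hz - grid_freq_hz)
    return angle_diff <= MAX_ANGLE_DIFF_DEG and freq_diff <= MAX_FREQ_DIFF_HZ

# Reclosing far out of phase, as Aurora forced, would fail the check:
# safe_to_reclose(170.0, 0.0, 60.0, 60.0)  -> False
```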

Earlier still, in 2000, a disgruntled former worker at an Australian water-treatment center hacked into its central computers and sent commands that disabled the pumps, allowing raw sewage to flow into the water. In 2001, hackers broke into the servers of a California company that transmitted electrical power throughout the state, then probed its network for two weeks before getting caught. These incidents, recounted in Kim Zetter's 2014 book, Countdown to Zero Day, were ignored or dismissed as flukes until the Aurora Generator Test, sponsored by the U.S. government, revealed that the vulnerability was systemic.

Was the hacker of the water treatment plant in Florida also a disgruntled former worker? Federal agents are tracking the forensics, but the culprit hasn’t yet been found.

The bigger danger is that governments have also been engaged in this sort of hacking, and for a very long time. Their intent isn’t necessarily to do damage; often it’s to gather intelligence, to see how foreign countries design or protect their critical infrastructure networks. But some countries—including China, Russia, Israel, France, and, yes, the United States—have buried implants in these networks, implants that can be activated to wreak disabling damage in case, or in advance, of war.

If the Florida hacker had been more sophisticated, if he’d possessed the resources of a nation-state, he could have cloaked his actions, and the plant operator might never have noticed the manipulation.

For instance, in the 2009–10 Stuxnet operation, the elaborate U.S.-Israeli hacking of Iran's uranium-enrichment plant at Natanz, the main targets were the plant's centrifuges, the fast-spinning machines that enrich uranium, which the hackers sabotaged by slowing them down or speeding them up. A crucial side target was the array of sensors that monitored the plant's operations. The Stuxnet team fed false readings from those sensors to the control room, tricking the Iranian scientists into thinking that the centrifuges were spinning at the right speed. When the devices broke, the scientists blamed faulty supplies, and political authorities suspected inside saboteurs.
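
Detecting that kind of deception is hard but not hopeless. Because real process readings carry the natural noise of a physical system, a feed that has gone implausibly quiet is one telltale sign of replayed data. Here is a minimal Python sketch of that heuristic; the window size and noise floor are invented for illustration.

```python
# Hedged sketch of one heuristic against replayed or frozen sensor data:
# a real physical process produces noisy readings, so a feed whose recent
# values show almost no variance deserves an alarm.

from collections import deque
import statistics

class ReplayMonitor:
    def __init__(self, window: int = 120, noise_floor: float = 0.01):
        self.readings = deque(maxlen=window)   # rolling history of readings
        self.noise_floor = noise_floor         # assumed minimum natural noise

    def looks_replayed(self, reading: float) -> bool:
        """Return True once a full window of readings is implausibly static."""
        self.readings.append(reading)
        if len(self.readings) < self.readings.maxlen:
            return False  # not enough history yet to judge
        return statistics.stdev(self.readings) < self.noise_floor
```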

A few years earlier, in 2007, four Israeli fighter jets attacked and destroyed a nascent nuclear reactor in Syria. The planes managed to elude Syria's air defense system, an advanced one recently purchased from Russia, because Unit 8200, Israel's secret cyberwar organization, had hacked into the system ahead of time and implanted a false image on its radar screens, which therefore appeared blank; the operators never saw the planes coming.

What if the Florida hacker had implanted false data in the plant's sensors, so that the gauges told the operator everything was fine, that the level of lye was still 100 parts per million rather than more than 100 times that amount? Hundreds of people would have been poisoned, and nobody would have known why.

Only a very sophisticated hacker, with access to ample resources, could pull off such a feat. But private hackers are getting more sophisticated, and it's not at all out of the question that some other nation-state could do this. Soon after Stuxnet was exposed, it became clear that the United States and Israel were far from the only countries with highly skilled cyberwarriors in their militaries.

As far as it went, the Florida hack probably could have been blocked. Bob Gourley, co-founder and chief technology officer of OODA LLC, a cybersecurity firm, told me that, while he hasn't seen any forensics on this incident, his “educated guess,” based on analyses of hacks like it, is this: “The plant probably did not have two-factor authentication set up and maybe even left default passwords in place. So the bad guy just logged in to the cloud control panel for the account and was able to control things.”
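
Both of the gaps in Gourley's educated guess have cheap, well-understood fixes. The sketch below, in Python, shows the idea: refuse factory-default passwords and require a time-based one-time code as a second factor. It uses the pyotp library; the password list and secret are placeholders, and a real system would also verify passwords against stored hashes.

```python
# Minimal sketch of the two controls Gourley's guess says were missing:
# rejecting known default passwords and requiring a time-based one-time
# code (TOTP) as a second factor. Requires the pyotp library
# (pip install pyotp); the values below are placeholders.

import pyotp

KNOWN_DEFAULT_PASSWORDS = {"admin", "password", "12345", "default"}

def login(password: str, one_time_code: str, totp_secret: str) -> bool:
    """Allow access only with a non-default password and a valid TOTP code."""
    if password in KNOWN_DEFAULT_PASSWORDS:
        return False  # factory-default credentials are never acceptable
    # (A real system would also check the password against a stored hash.)
    totp = pyotp.TOTP(totp_secret)
    return totp.verify(one_time_code)  # second factor must match right now
```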

More than a decade after the Aurora Generator Test, more than two decades after the disgruntled Australian, and more than three decades after the first U.S. government studies detailing the vulnerability of industrial networks, the companies that run our critical infrastructure, the mainsprings of our social and economic lives, still haven't learned the lessons, haven't made the necessary investments, and haven't taken the obvious precautions, in part because the government hasn't made them do so. It's long past time to change that.
