War Stories

Infrastructure Weak

The time to do something about protecting our power plants and waterworks from Russian hackers was 20 years ago.

The New York Times reported last week that the Russians have been hacking into U.S. and European nuclear power plants, waterworks, and electrical grids, with the aim of sabotaging or shutting them down at will.

The front-page story sparked almost no follow-up. That could be because these sorts of hacks have been reported many times before. But that is precisely what makes the story alarming: the intrusions have been going on for years, and (here’s the frightening thing) we’ve done very little about them, even as we’ve gotten better at detecting them.

Improvements in detection might account for some of what the Times sees as a surge of hacks. Several cybersecurity specialists, including former National Security Agency officials, tell me that it’s hard to know whether the hacks have in fact grown in number—or whether we’re just looking for them more intensely and, therefore, finding more.

The Times story views the intrusions—which it says have been on the rise since 2015—as part of a pattern of Russian cyberoperations, including the hacking of our presidential election. But in fact, they began a long time ago, and their roots are not entirely Russian.

As far back as 1967, at the dawn of the internet, a few brilliant scientists warned that computer networks were intrinsically vulnerable to outside interference. The alarm was first sounded in high political circles in 1984, with a national security decision directive signed by Ronald Reagan. In 1997, soon after a slew of industries—banks, transportation lines, electrical grids, power plants, and more—started putting their operations online, a blue-ribbon panel appointed by Bill Clinton concluded that this vulnerability now extended to the nation’s “critical infrastructure.”

“Today,” the so-called Marsh Commission report stated, “a computer can cause switches or valves to open and close, move funds from one account to another, or convey a military order almost as quickly over thousands of miles, as it can from next door, and just as easily from a terrorist hideout as from an office cubicle or a military command center.”

Even a year before the Marsh report, a task force of the Pentagon’s Defense Science Board bemoaned the “increasing dependency” on vulnerable computer networks as “ingredients in a recipe for a national security disaster.” The report recommended more than 50 actions to be taken over the subsequent five years, at a cost of $3 billion. Like many similar reports cranked out in the 20-plus years since, it was ignored.

A decade later, on March 4, 2007, the U.S. Department of Energy conducted an experiment—called the Aurora Generator Test—to see whether clever hackers could destroy a physical machine, in this case a large diesel generator, by purely cyber means. The video (which you can watch on YouTube) clearly shows that they could.

It was around this time that the NSA embarked on Operation Olympic Games, aka Stuxnet, a joint U.S.-Israeli plot to sabotage Iran’s nuclear program by hacking into the computers that controlled how fast the centrifuges at its uranium-enrichment plant were spinning. As Michael Hayden, a former NSA and CIA director, later said, “Somebody has crossed the Rubicon.”

Its architects saw Stuxnet—the most complex piece of malware devised up to that time—as a military program, designed to prevent Iran from building a nuclear weapon. But it was also—and the Iranians certainly saw it as—an attack on “critical infrastructure.”

Well before then, the major cyberpowers of the day—the United States, Russia, China, Israel, and France—were hacking into one another’s military networks, stealing weapons blueprints, and probing command-control systems. After the Aurora Generator Test and Stuxnet proved that critical infrastructures—civilian and military—were not just theoretically vulnerable, cyberwarriors set their sights on those targets too.

Last week’s Times story claims that earlier cyberops against power plants and waterworks were designed to map out their computer networks, mainly for espionage, whereas the recent intrusions have gained access to the operational systems themselves and have thus laid the groundwork for an attack.

However, several former NSA officials told me that, in this respect, the old and new attacks are not very different.

In any case, the distinction between espionage and attack is not always so clear.

Starting in the 1990s, the NSA attached labels to three kinds of cyberoperations—computer network defense, computer network attack, and computer network exploitation. The difference between CND and CNA was clear; the wildcard was CNE—defined, literally, as operations that exploit vulnerabilities. CNE operations could be defensive (probing an enemy’s network to see if an attack is in the making or just to see how the network functions) or offensive (probing the network to prepare the battleground for an attack).
The technology for CNE and CNA is the same; the operations look the same; in fact, they are the same, except that CNA requires just one more step—the attack itself, which can be carried out at lightning speed.

More than 20 nations now have armies with cyberwar units, and a growing number of them are hacking into foreign infrastructures. They may do this to steal industrial secrets, probe for soft spots, prepare for an attack, or simply test their capabilities. In the eyes of the target, motives hardly matter, since an operation aimed at one of those goals can swiftly segue into another.

Cyberteams are hacking into critical infrastructures in the United States more frequently than was once the case. Robert M. Lee, a former NSA official who is now the founding CEO of Dragos, an industrial cybersecurity firm, says that, until recently, maybe one team was spotted inside some critical infrastructure in the course of a year. In 2017, he said, five different teams were spotted, successfully hacking into “dozens or hundreds” of targets.

If hackers are now more easily detected, can they just be kicked out of the network? It’s not so simple, especially if the hacker is sophisticated. When Adm. Mike Rogers, now the NSA director, ran the Navy’s Fleet Cyber Command at the start of the decade, a serious virus was infecting vast swaths of U.S. Navy networks, and it took months for even this elite squad of hackers and counter-hackers to track, trace, and evict the intruder. Private companies and public utilities have far fewer resources at hand.

Even if a hacker could easily be kicked out, it might not be a good idea. Jake Williams, a former NSA operator and now principal of Rendition Infosec, a cybersecurity firm, says, “Nation-states have multiple CNE tools. They don’t know which one you’ve caught, but if you kick it out, they do know. They’ll then deploy new tools that you might not know about, and you’ll lose the ability to track them.”

So, what should we do? Is there anything we can do? Life would be less convenient, but much more secure, if the industrial executives of the 1990s had taken note of the official warnings and not hooked the heartbeat of their operations to the internet. The same impulse has infected the U.S. military. In every war game that tests the vulnerability of some unit’s computerized command-control system, the intruder always gets in.

As with other forms of warfare, a strategy of deterrence can keep an attacker at bay. For a country to be deterred from launching an attack, its leaders have to know what consequences they might face. One reason no leaders have fired a nuclear weapon in anger since 1945 is that they fear the object of the attack—or its nuclear-armed ally—will retaliate in kind. Maybe they don’t entirely believe the retaliation would happen, but they’ve heard the threat uttered so many times, they at least have to weigh the risks very carefully.

But nobody has ever defined what deterrence means in cyberwar. One break we’ve had in the past decade or so is that other countries, notably Russia and China, have steadily copied our mistake of putting critical infrastructures online; as a result, a state of mutual deterrence (if you attack us, we’ll attack you) has naturally evolved. But beyond that, our political and military leaders haven’t done much systematic thinking.

Nobody knows what consequences a major cyberattack would trigger. As for minor cyberattacks, they happen every day, at no cost to the attackers, and no one has ever defined the line between major and minor, a gap that might tempt some to take bigger risks than they otherwise would. If someone launches a cyberattack on a bank, is it the government’s responsibility to retaliate? What about an attack on six banks, or on two small power plants?

More could be done to protect critical infrastructure from cyberattacks, but a purely defensive approach has its limits. Above all, the government needs to formulate a strategy for cyberdeterrence—a statement of what it would do in response to which kinds of attacks. A group of strategists managed to come up with such a statement just months after Hiroshima, and over the next few years the military created programs to put the strategy in place. Twenty years after the Marsh Commission report, more than a decade after the Aurora Generator Test and Stuxnet, we still haven’t taken that first step.