The Russian hack of SolarWinds—which affected at least 18,000 of the firm’s customers, including several federal agencies—has revived a long, unsettled debate in national security circles: When Americans are hit with a massive cyberattack, should the U.S. government strike back?
At first glance, the answer seems obvious: Of course, we should strike back—an eye for an eye, a tooth for a tooth—or how else will we deter the hackers, and others like them, from striking again?
On reflection, though, the question turns more complicated. Compared with the rest of the world, the United States, in all aspects of its life, is much more thoroughly connected to computer networks. We have the most powerful and precise cyber-rocks to throw at other countries’ windows—but we live in a much glassier house. Therefore, retaliation could spark counterretaliation, and, at each cycle of escalation, we could get hurt more badly than our adversary does.
Nevertheless, even some experts who have urged caution and taken note of our hypervulnerability are now saying that we have to do something. One of them, Richard Clarke, cybersecurity chief in President Bill Clinton’s White House and author of Cyber War—one of the first books to raise alarms about the subject—told me in an email that the SolarWinds hack “is over the line and requires a response. Yes, we run the risk of an escalating round of mutual damage, but that may be what it takes for this country to start taking the long list of necessary steps to secure [our] networks and what they run.”
President-elect Joe Biden seems to agree, saying he would impose “substantial costs” on those responsible for the hack. “A good defense isn’t enough,” he added. “We need to disrupt and deter our adversaries from undertaking cyberattacks in the first place.”
Fine. But how do we do this? What costs do we impose? And how do we ensure that the disruptions deter future attacks? President Barack Obama once signed a directive declaring that the United States might respond to a cyberattack with noncyber means—for instance, through diplomacy, sanctions, or other sorts of attacks. This was a shrewd distinction, but if he ever followed through on this directive, his acts of noncyber retaliation were never made public—and, in any case, had no apparent impact on the rate and magnitude of subsequent cyberattacks.
The problem is that the whole concept of “cyberdeterrence” is inherently difficult. In nuclear deterrence, the theory is fairly clear: If you attack me, I’ll attack you, and the prospect of my counterpunch will deter you from punching in the first place. In the 75 years since Hiroshima, there has been a bold red line between the use and nonuse of nuclear weapons—which is one reason the nuclear powers have been hesitant to use nuclear weapons at all.
By contrast, in the cyber field, there are no red lines. Thousands of cyberattacks are launched every day—by mischievous hackers, criminals, terrorists, and nation-states—of varying size, targets, intentions, and effect. Where should policymakers draw the line between a nuisance and a national security threat? At what point should the U.S. government step in to exact revenge for an attack against a private company?
These are key questions not only for guiding our own policy but also for guiding the policies of our adversaries, since, in order for deterrence to work, adversaries need to know what responses their actions might provoke.
Which brings us back to the hack on SolarWinds’ Orion network management system. Was this a cyberattack—or was it merely an act of espionage? It isn’t yet clear. We know that the Russian intelligence service planted malware on a SolarWinds security-update alert; if clients clicked the alert, they downloaded the malware. This went on for eight months without detection. It can be assumed the Russians stole all the information they might want from the network. Did they also damage the network? That is, in the process of prowling through the network, did they delete or alter files—or leave behind a beacon that will wreak damage upon some future command? It’s possible, but we don’t know yet. Nor is it clear what sort of damage these digital time bombs, if they exist, might do.
The point is, before we start waging cyberwar against Russia or anybody else, we should assess the nature of the hack. Whatever it was, it was not an act of war.
Any assessment must also recognize the following: We do this sort of thing too, and have been doing it for a long time. Our cyberattacks tend to be more focused on specific targets, for specific aims. But the National Security Agency, Cyber Command, and certain units of the CIA have long been carving “backdoors” into foreign networks, roaming around in the critical infrastructure of adversaries, and planting malware that can damage this infrastructure on command.
In 2014, after realizing there was no way for America’s vital networks to defend themselves from a sophisticated cyberattack, Cyber Command adopted a policy of “active defense.” Defining the concept, Adm. Michael Rogers, the commander at the time, said the “biggest focus” would be “to attempt to interdict the attack before it ever got to us”—in other words, to get inside the adversary’s network, in order to detect him preparing an attack, then deflect or preempt it.
So, before U.S. leaders set about responding to the SolarWinds hack, they should articulate how it differs from the things that we sometimes do—why the Russians deserve punishment and we don’t. (I’m not saying that there is no distinction or cause for retaliation—only that, if there is, our leaders should be clear about what it is, in their own minds and in statements justifying their action.)
The remarkable thing is that, more than 50 years after the invention of the internet, more than 35 years after the first presidential study warning of computer vulnerability, and more than 20 years after the first known foreign attacks on U.S. computer networks, no one in a position of power has drawn the distinction between cyberespionage and cyberattack—nor has anyone struck a clear definition of cyberdeterrence or delineated what kinds of cyberattacks the government should try to deter and, if necessary, respond to.
It’s also troubling that, after all this time, the U.S. government has done such a scattered, incomplete job of patching its security holes. If we’re going to start retaliating in kind to cyberattacks as a matter of policy (or even if we’re not), we need to get more serious about beefing up our defenses. The latest defense authorization act adopts half of the 52 recommendations put forth by a congressionally appointed commission on the subject. It’s a start.
Toward the end of George W. Bush’s presidency, after getting briefed on cyberattacks day after day after day, Secretary of Defense Robert Gates suggested that the major cyber powers get together to lay out some “rules of the road” that might defuse our mutual vulnerabilities—an agreement, say, not to launch cyberattacks on computer networks that control critical infrastructure such as dams, power grids, and air traffic control systems. He noted that even during the Cold War, the U.S. and the Soviet Union set and followed some basic rules; for instance, they agreed not to kill each other’s spies. In cyberspace, there were no rules.
“We’re wandering in dark territory,” Gates would say in these conversations. It was a phrase from his childhood in Kansas, where his grandfather worked for nearly 50 years as a stationmaster on the Santa Fe Railroad. “Dark territory” was an industry term for a stretch of rail track that was uncontrolled by signals. It was a perfect metaphor for cyberspace, except that this new territory was much vaster and the danger was greater because the engineers were often unknown, the trains were invisible, and a crash could cause far more damage.
We’ve learned a lot more about this territory in the decade since Gates’ musings. Security has improved; more controls are in place. But like mutating viruses, the hackers have found new ways to maneuver around the security, new ways to manipulate the controls. The landscape is still very murky.