On Wednesday, March 9, New America’s Cybersecurity Initiative will host its annual Cybersecurity for a New America Conference in Washington, D.C. This year’s conference will focus on securing the future of cyberspace. For more information and to RSVP, visit the New America website.
So, what does cyberwar mean anyway?
At core, when we talk about cyberwar, we’re just talking about warfare conducted through computers and other electronic devices, typically over the Internet. As the very ’90s prefix cyber- (when was the last time you heard someone talk about cyberspace with a straight face?) suggests, it’s been part of our cultural and political conversations since the early ’80s. In recent years, however, such conversations have picked up as those in power become more conscious of our reliance on computers—and our consequent vulnerability. Perhaps more importantly, information like that disclosed by Edward Snowden has demonstrated that governments have already made preparations for virtual conflict, whether or not they’re actively engaging in it now.
You haven’t really answered my question.
That’s partly because it’s hard to provide a definitive answer. Experts tend to argue about the way that we use the term cyberwar. Some suggest that we should use it to discuss only acts of war that do real damage by purely digital means. In this sense, Cyber World War I would presumably be a war conducted entirely online, one in which there would be no opportunity for “boots on the ground.” Loss of life—if there were any—would presumably come about through attacks on the digital components of critical infrastructure rather than from bullets or bombs.
Since many feel that we’re unlikely to ever find ourselves in such a situation, it might be more practical to speak of a cyberwar as a conflict in which the digital simply plays a central part. But if we take that approach, we might just be giving a new name to the way we already do things.
What does cyberwar actually look like?
If we’re really going to keep things simple, we could probably identify four kinds of cyberwarfare, though whether each belongs under the umbrella is controversial (and Futurography will address that in an upcoming piece):
The first is the grayest, since it’s hard to distinguish from conventional espionage. It involves activities like mapping and accessing protected computer systems in order to acquire information. Much of this falls under the rubric of “signals intelligence,” a concept that’s been circulating in intelligence communities since the early years of the 20th century.
Second is large-scale intellectual property theft, which is also arguably a subset of old-fashioned espionage. It has, however, accelerated with the rise of the Internet, costing the U.S. hundreds of billions of dollars a year, according to some estimates. Cybertheft has become serious enough that in 2015 the U.S. and Chinese governments formally agreed not to support or encourage it, though neither admitted to doing so in the first place.
Third, there are direct attempts to disable computers and networks. Here, we’re in the terrain of distributed denial of service attacks, Trojans (malicious programs that make their way onto computers by pretending to be something else), and so on. We see this all the time when someone, say, conducts a DDoS attack against an organization he doesn’t like. But the stakes get even higher when the same thing happens to a bank or a hospital.
Finally, you have attempts to use computers to cause physical damage through viruses and other forms of malicious code. When people worry about cyberwarfare, this is usually what they have in mind: They’re imagining attempts to knock airplanes out of the sky or blow up pipelines. With a few important exceptions, this branch of cyberwar is mostly theoretical.
Am I crazy for thinking a lot of this just sounds like run-of-the-mill hacking?
You’re not, which is why many references to cyberwar employ the term primarily (or even exclusively) to describe the actions of nation-states. If we take that approach, cyberwar names what countries are up to when they hack one another. But if we apply the term too broadly, it amps up the stakes of things that nations are doing to one another as a matter of course, such as surveillance of foreign leaders. If we insist on classifying ordinary peacetime intelligence-gathering as a kind of war, we could end up escalating conflicts where none existed previously.
A further complication comes from the “attribution problem,” the difficulty of definitively assigning blame for a cyberattack. This can give governments plausible deniability, allowing them to shunt their actions onto private entities, even when it seems clear who’s done what. When hackers accessed a Pentagon email system in 2015, for example, reports suggested that Russia was responsible, but it was difficult to discern whether individuals or state actors were actually at fault. Under such circumstances, how do we know what does and doesn’t count as an act of war?
Can we get back to the physical damage thing? Have we ever seen anything of the kind?
Some claim that a logic bomb was responsible for an explosion on the trans-Siberian pipeline in the ’80s, but that assertion has been widely disputed. The most famous example we can be sure actually happened is therefore the Stuxnet worm. Released by the United States under the Obama administration—and seemingly developed in collaboration with Israel—Stuxnet was an attempt to disrupt the Iranian nuclear program. It worked by sabotaging centrifuges, speeding them up or slowing them down in ways that made them fall apart. Before it spread into the wild and was detected, this cyberweapon destroyed almost one-quarter of Iran’s nuclear centrifuges. More recent revelations indicate that the United States planned subsequent actions—under the code name Nitro Zeus—in the event that talks with Iran fell apart.
Did Iran strike back?
Not directly, but hackers seemingly associated with Iran have committed other acts of digital aggression. In 2014, after right-wing American businessman Sheldon Adelson advocated the use of weapons against Iran, hackers attacked computers and servers of Adelson’s Sands casinos. Instead of stealing money or information, the hackers—identified as Iranian by U.S. intelligence—destroyed the casinos’ computers, doing tens of millions of dollars of damage in the process. As Slate’s Fred Kaplan writes in his book Dark Territory, this “was a new dimension, a new era of cyber warfare,” since the hackers’ intent was solely “to influence a powerful man’s political speech.”
This attack on Sands may have been more representative of the existing norms of cyberwar (again, depending on how you define it, and whether you include attacks on commercial entities) than Stuxnet. The hackers’ modus operandi—shutting down computers and defacing Web pages—resembles that of Anonymous, which trades in denial of service attacks and similar aggression, more than it does the work of the National Security Agency.
Are there other examples of countries coming after the United States?
In 2008, NSA analysts discovered what Kaplan calls “a few lines of malicious code” operating in the U.S. Central Command’s network. The worm, which sought to scan the system for vulnerabilities, seems to have entered the network—one that wasn’t connected to the public Internet—through a flash drive of Russian origin. Even so, who created it remains uncertain.
The most famous politically motivated breach in recent years, however, is probably the Sony Pictures Entertainment hack. When the company was preparing to release The Interview, a hacker group called the Guardians of Peace came after it. The hackers destroyed thousands of computers and appropriated immense amounts of information, much of which was subsequently leaked online. The FBI publicly connected the Guardians of Peace to the North Korean government, which felt the film was disrespectful. Since this wasn’t one government directly attacking another, some wouldn’t classify it as cyberwar. But it’s still an example of a politically motivated attack executed through the Internet, with lasting consequences, apparently at the behest of a government.
OK, that sounds bad, but what about the real nightmare stuff? Should we worry about attacks on the power grid?
It’s a scary thought, one that some political candidates have explored at length: If someone were to knock out critical elements of our infrastructure, we’d presumably be looking at the collapse of civilization as we know it. For the most part, though, our power systems aren’t as vulnerable as, say, Sony’s computer network, partly because their critical components aren’t typically connected to the Internet. Despite that, the U.S. government is attentive to the possibility that such attacks might occur, and the Defense Advanced Research Projects Agency has invested millions to prevent them.
If cyberwar is mostly about the actions of countries, how does terrorism fit in?
This is where things get even fuzzier: Generally, when people talk about cyberterrorism, they’re thinking about recruitment. President Obama famously described ISIS as “a bunch of killers with good social media,” referring to the role that Twitter, Facebook, and other sites have played in the group’s spread. Social media may also help predict the actions of terrorist organizations. On both sides of the equation, however, we’re still dealing with ways that the Internet facilitates real-world action, not things done solely through the Internet.
It’s certainly conceivable that a terrorist group could employ the weapons of cyberwar in support of its cause. The source code for Stuxnet is publicly available, for example, and though it was designed to target the Iranian systems in particular, some worry that it might be possible to transform it into a different sort of weapon. But we haven’t seen anything of the kind to date.
You mentioned Snowden earlier. How do the NSA’s data collection programs fit into all this?
If you’re referring specifically to the metadata-based domestic surveillance initiatives, they don’t really, since by most understandings they’re more a matter of policing than of out-and-out warfare. That being said, the documents Edward Snowden leaked contain information about cyberwarfare projects. Presidential Policy Directive 20—a document created under the Obama administration—affirms the importance of offensive cyberoperations in advancing “U.S. national objectives around the world.” It also acknowledges, however, that developing capabilities for such attacks “may require considerable time and effort.” In other words, we’re probably not going to tumble into Cyber World War I any time soon.
So is there any reason to worry?
As Kaplan points out in Dark Territory, one of the most significant dangers may be the strangely asymmetrical position that the U.S. finds itself in. Our reliance on computer systems means that we’re actually more vulnerable than many of the targets that PPD-20 nods to.
More generally, the most important function of conversations about cyberwarfare may be to ensure that cyberwarfare never actually happens. The U.S. started preparations for cyberwarfare after a panicked Ronald Reagan saw WarGames in 1983. As far as we know, no one’s ever come close to actually hacking the country’s nuclear arsenal, and it’s possible that we have Reagan’s fearful reaction to thank for that. In that sense, we probably should keep talking, but it may be best to dial back the alarmism: Cyberattackers could be digging through our systems in search of exploits and back doors, but they won’t be turning off the lights.
So what should we be talking about then?
That’s partly on you! We’ll be focusing on this topic all month, and we want that coverage to be as useful as possible. What questions remain for you? What still troubles you? And, of course, what do you think?
This article is part of the cyberwar installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.