This piece was originally published in New America’s digital magazine, The New America Weekly. On Wednesday, March 9, New America’s Cybersecurity Initiative will host its annual Cybersecurity for a New America Conference in Washington, D.C. This year’s conference will focus on securing the future of cyberspace. For more information and to RSVP, visit the New America website.
“Offense dominated.” In cybersecurity, this phrase refers to an attacker’s inherent advantage over the defender. The unwanted guest needs only to find a single flaw in a system to gain access. The phrase also casts the defender as the hapless little Dutch boy, trying in vain to harden every aspect of the IT infrastructure. The idea of “offense dominated,” with its strong, ruthless attacker and weak victim, is a simple one. And it delivers a discouraging message to practitioners.
Cybersecurity pundits harp on this endlessly. Each new cyberattack is presented as a step toward the end of the digital era—toward domination by the evil offense. Bloated cybersecurity budgets, malware arms races, and government intervention are just some of the predicted consequences, and some are already happening today. Soon, the story goes, we will have to admit defeat, dump our technology, and—as one German government official suggested—return to typewriters.
But I don’t think so. I think we’re missing something. And it is this: If we’re going to think of cyberwar as conflict, then we need to afford it the complexity and nuance we ascribe to conventional warfare.
I used to be in the Army. In 2003, I deployed with my Infantry battalion to Baghdad, Iraq. We turned an old guest house into our Forward Operating Base, out of which we lived, worked, maintained our trucks, and so on. And yes, we did everything we could to harden that base against car bombs, suicide attacks, and the occasional grenade—just as security administrators harden their systems against advanced persistent threats, denial-of-service attacks, and the latest social engineering techniques.
But there was a limit as to what we could do. There were only so many soldiers to fill sandbags.
Turning our base into Fort Knox was simply not an option. We had to make decisions about how to build our defenses. Many of those decisions had to do with the inherent vulnerabilities of our building, but many had to do with another factor: the enemy.
Our enemies in Iraq were the budding insurgents. These guys were often well-trained. They were deadly. And we had to be ready.
But they had their limits, and, like many units, we did everything we could to gather information on them to understand what their capabilities were.
At the time, we knew that the insurgents in our area did not have tanks. They did not have anti-aircraft weapons, nor did they have armor-piercing bullets. We also knew they had access to explosives, stockpiles of small arms, significant expertise with both, and individuals who were willing to conduct suicide attacks, either by foot or in a car.
This information about the adversary was real. It was actionable. We used this information to prioritize our defense. We didn’t simply wait for another American unit to be attacked. And we didn’t set up a fake base in order to watch how the insurgents might attack it. Instead, we looked to gather information on how the insurgents were planning attacks, what weapons they were going to acquire, and so on.
We talked to people—a lot of people. And we kept our finger on the pulse of the enemy.
This is what is largely missing from cybersecurity today. The belief—or, rather, the knowledge—that we are not hapless defenders. Today, we are focused on turning our computers and networks into impenetrable forts. This is just not sustainable. Smart adversaries will always build higher ladders to scale higher walls.
We share information with others who’ve been attacked. We watch honeypots to see how attackers might go after our systems. Granted, this is useful information. But it’s not predictive intelligence—it only tells us what the enemy is currently using. The best intelligence is predictive. It gets inside the enemy’s planning and decision cycles. Intelligence should also be actionable—it needs to drive the decisions of chief security officers. It needs to allow them to make smart strategic choices. These choices should then significantly limit the adversary’s possibilities. They should force him to go back to the proverbial drawing board before we find our customer data for sale on Tor or hear about our national security secrets on Pastebin. Or, worse, learn that our military personnel files have been happily downloaded.
My colleagues and I at Arizona State are taking steps toward predictive and data-driven intelligence. (Disclosure: ASU is a partner with Slate and New America in Future Tense.) We actively study malicious hacker communities and are using sophisticated techniques, from disciplines such as machine learning and game theory, to drive important cyber defense decisions. The malicious hacking community is thriving and sophisticated. We find evidence of “fully undetectable” malware and “zero day” exploits for sale every day. We see malicious hackers offer their services for hire. We see conversations where more experienced blackhats give advice on how to leverage stolen PayPal information. The enemy is real, but it is also knowable. Computer systems have grown very complex and sophisticated, and having a community to discuss your efforts to launch a cyber-attack is a great enabler.
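To make the idea of intelligence-driven defense concrete, here is a minimal sketch of how knowledge of an adversary’s capabilities can prioritize defensive spending. All of the attack vectors, probabilities, and impact figures below are invented for illustration; a real system would derive such estimates from gathered intelligence, not hard-code them.

```python
# Hypothetical illustration: rank defensive priorities by expected loss,
# where P(vector is feasible) comes from intelligence on the adversary.
# Every name and number here is an assumption made for the sketch.

# Estimated probability that this adversary can execute each attack vector.
adversary_capability = {
    "phishing": 0.9,
    "zero_day_exploit": 0.2,
    "ddos": 0.6,
    "insider_threat": 0.1,
}

# Estimated cost to the organization if each attack succeeds (arbitrary units).
impact_if_successful = {
    "phishing": 50,
    "zero_day_exploit": 90,
    "ddos": 30,
    "insider_threat": 80,
}

def prioritize(capability, impact):
    """Rank attack vectors by expected loss: P(feasible) * impact."""
    expected_loss = {v: capability[v] * impact[v] for v in capability}
    return sorted(expected_loss.items(), key=lambda kv: kv[1], reverse=True)

for vector, loss in prioritize(adversary_capability, impact_if_successful):
    print(f"{vector}: expected loss {loss:.1f}")
```

The point of the sketch is the shape of the decision, not the numbers: when intelligence says the adversary has no zero-day capability (a low probability above), scarce defensive resources shift toward the vectors the enemy can actually use—just as the battalion in Baghdad did not sandbag against tanks the insurgents didn’t have.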
Knowing the enemy—in addition to ourselves—should be the first step toward a successful defense.
This article is part of the cyberwar installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on cyberwar.