There’s a moment in Dr. Strangelove, Stanley Kubrick’s dark Cold War comic masterpiece, when President Merkin Muffley (played by Peter Sellers) learns that an insane general has exploited a loophole in the military’s command-control system and launched a nuclear attack on Russia. Muffley turns angrily to Air Force Gen. Buck Turgidson (played by George C. Scott) and says, “When you instituted the human reliability tests, you assured me there was no possibility of such a thing ever occurring.” Turgidson gulps and replies, “I don’t think it’s quite fair to condemn a whole program because of a single slip-up.”
The National Security Agency currently finds itself in a similar situation.
One of the NSA’s beyond–top secret hacking tools has been stolen. And while the ensuing damage falls far short of an unauthorized nuclear strike, the thieves have wreaked cybermayhem around the world.
The mayhem was committed by a group called the Shadow Brokers, which in April announced that it had acquired the NSA tool (known as Eternal Blue) and published its exploit code online for any and all hackers to copy.* In May, some entity—widely believed to be North Koreans—used the exploit code to develop malware, which became known as WannaCry, and launched a massive ransomware attack, which shut down 200,000 computers, including those of many hospitals and other critical facilities.
Then on June 27 came this latest attack, which was launched by the Shadow Brokers themselves. This struck some security analysts as odd, for two reasons. First, the Shadow Brokers are believed to be members of—or criminal hackers affiliated with—a Russian intelligence agency, and Russians tend not to hack for mere cash. Second, the attack was slipshod: The ransoms were to be paid to a single email address, which security experts shut down in short order. If the Russians had decided to indulge in this mischief for money, it was surprising that they did it so poorly.
Now, however, several cybersecurity analysts are convinced that the ransomware was a brief ploy to distract attention from a devastating cyberattack on the infrastructure of Ukraine, through a prominent but vulnerable financial server.
Jake Williams, founder of Rendition InfoSec LLC (and a former NSA analyst), told me on Thursday, two days after the attack, “The ransomware was a cover for disrupting Ukraine; we have very high confidence of that.” This disruptive attack shut down computers running Ukrainian banks, metro systems, and government ministries. The virus then spread to factories, ports, and other facilities in 60 countries—though Williams says it’s unclear whether this rippling effect was deliberate. (Because computers are connected to overlapping networks, malware sometimes infects systems far beyond a hacker’s intended targets.)
By the way, the attack left the ransomware victims, marginal as they were, completely screwed. Once the email address was disconnected, those who wanted to pay ransom had no place to send their bitcoins. Their computers remain frozen. Unless they had back-up drives, their files and data are irretrievable.
It’s not yet clear how the Shadow Brokers obtained the hacking tool. One cybersecurity specialist involved in the probe told me that, at first, he and others figured that the theft had to be an inside job, committed by “a second Snowden,” but the forensics showed otherwise. One possibility, he now speculates, is that an unnamed NSA contractor, who was arrested last year for taking home files, either passed them on to the Russians or was hacked by the Russians himself. The other possibility is that the Russians hacked into classified NSA files. It’s a toss-up which theory is more disturbing; the upshot of both is, it could happen again.
So should the NSA stop hacking computers out of concern that bad guys could steal its tools and use them for their own nefarious purposes? This remedy is probably unreasonable. After all, spy agencies spy, and the NSA spies by intercepting communications, including digital communications, and some of that involves hacking. In other words, the cyber equivalent of Gen. Turgidson would have a point if he told an angry superior it’s unfair to condemn a whole program for a single slip-up.
Besides, the NSA doesn’t do very many hacks of the sort that the Shadow Brokers stole—hacks that involve “zero-day exploits,” the use of vulnerabilities (in software, hardware, servers, networks, and so forth) that no one else has yet discovered. Zero-day exploits were once the crown jewels of the NSA’s signals-intelligence shops. But they’re harder to come by now. Software companies continually test their products for security gaps and patch them right away. Hundreds of firms, many created by former intelligence analysts, specialize in finding zero-day vulnerabilities in commercial products—then alerting the companies for handsome fees. Often, by the time the NSA develops an exploit for a zero-day vulnerability, someone in the private sector has also found it and already developed a patch.
More and more, in recent years, the NSA chooses to tell companies about a problem and even help them fix it. This trend accelerated in December 2013, when a five-member commission, appointed by President Obama in the wake of the Snowden revelations, wrote a 300-page report proposing 46 reforms for U.S. intelligence agencies. One proposal was to bar the government from doing anything to “subvert, undermine, weaken, or make vulnerable generally available commercial software.” Specifically, if NSA analysts found a zero-day exploit, they should be required to patch the hole at once, except in “rare instances” when the government could “briefly authorize” the exploit “for high-priority intelligence collection,” though, even then, only after approval not by the NSA director—who, in the past, made such decisions—but rather in a “senior interagency review involving all appropriate departments.”
Obama approved this recommendation, and as a result his White House cybersecurity chief, Michael Daniel, drafted a list of questions that this senior review panel must ask before letting the NSA exploit, rather than patch, the zero-day discovery. The questions: Would this vulnerability, if left unpatched, pose risks to our own society’s infrastructure? If adversaries or crime groups knew about the vulnerability, how much harm could they inflict? How badly do we need the intelligence that the exploit would provide? Are there other ways to get this intelligence? Could we exploit the vulnerability for just a short period of time, then disclose and patch it?
A 2016 article in Bloomberg News reported that, due in part to this new review process, the NSA keeps—and exploits for offensive purposes—only about two of the roughly 100 zero-day vulnerabilities it finds in the course of a year.
The vulnerability exploited in the May ransomware attack was one of those zero-days that the NSA kept for a while. (It is not known for how long, or which adversaries it allowed us to hack.) The vulnerability was in a Microsoft operating system. In March, the government notified Microsoft of the security gap. Microsoft quickly devised a patch and alerted users to install the software upgrade. Some users did; others didn’t. The North Koreans were able to hack into the systems of those who didn’t. That’s how the vast majority of hacks happen—through carelessness.
It may be time to view surfing the internet on computers as similar to the way we view driving cars on the highway. Both are necessary for modern life, and both advance freedoms, but they also carry responsibilities and can do great harm if misused. It would be excessive to require the equivalent of drivers’ licenses to go online; a government that can take away such licenses for poor digital hygiene could also take them away for impertinent political speech. But it’s not outrageous to impose regulations on product liability, holding vendors responsible for malware-infected devices, just as car companies are for malfunctioning brakes. It’s not outrageous to force government agencies and companies engaged in critical infrastructure (transportation, energy, finance, and so forth) to meet minimal cybersecurity standards or to hit them with heavy fines if they don’t. It’s not outrageous to require companies to program their computers or software to shut down if users don’t change or randomize their passwords or if they don’t install software upgrades after a certain amount of time. Or if this goes too far, the government could require companies to program their computers or software to emit a loud noise or flash a bright light on the screen until the users take these precautions—in much the same way that drivers hear ding-ding-ding until they fasten their seatbelts.
Some of these ideas have been kicking around for decades, a few at high levels of government, but they’ve been crushed by lobbyists and sometimes by senior economic advisers who warned that regulations would impede technical progress and harm the competitive status of American industries. Resistance came easy because many of these measures were expensive and the dangers they were meant to prevent seemed theoretical. They are no longer theoretical. The cyberattack scenarios laid out in government reports decades ago, dismissed by many as alarmist and science fiction, are now the stuff of front-page news stories.
Cyberthreats will never disappear; cybervulnerabilities will never be eliminated. They are embedded in the technology as it has developed over the 50 years since the invention of the internet. But the problems can be managed and mitigated. Either we take serious steps now, through a mix of regulations and market-driven incentives—or we wait until a cybercatastrophe, after which far more brutal solutions will be slammed down our throats at far greater cost by every measure.
*Correction, June 30, 2017: This article originally misstated that the NSA tool stolen by the Shadow Brokers was called WannaCry. It was called Eternal Blue, and its code was used to create WannaCry.