The FBI dropped its case against Apple on Monday, saying that it had “successfully accessed the data” stored on the San Bernardino, California, killer’s iPhone and, therefore, no longer needed the company’s assistance—which the bureau had been demanding in court and which Apple had been resisting.
This may seem like a happy ending all around, but in fact it’s a bad outcome for both parties—a bit more so for the bureau, at least in the short term.
Contrary to appearances, the fight was never about the specific phone used by Syed Farook. If it were—if FBI Director James Comey believed the phone contained data that was urgently needed for an investigation into terrorism—he could have sent a “Request for Technical Assistance” to the National Security Agency, as the FBI has done in such cases many times. The NSA could easily have hacked into the phone and turned over whatever it extracted to the bureau, officials say.
No, the FBI vs. Apple fight was always about—both parties rhetorically raised the stakes to make it about—the principles of privacy vs. security (or corporate security vs. national security) and whether decades of cooperation between telecoms and the intelligence agencies can survive new advances in encryption.
For the past six months, the bureau—in fact, the Obama administration—had been seeking a test case that it could very likely win, and officials thought they had one in the case to compel Apple to open up Syed Farook’s iPhone 5C. There were no Fourth Amendment issues; the phone belonged to Farook’s employer, which had consented to government inspection. There were no privacy issues; Farook was dead and thus had no privacy rights. The political optics were as favorable as they come. Farook was no petty criminal; he was a mass murderer with connections to ISIS.
The catch was this: The law that the bureau invoked, the All Writs Act of 1789, which has frequently been the basis for search warrants and wiretaps, lets citizens refuse to obey a writ if the government can get what it needs through some other means. In this case, the FBI announced last week that an “outside party”—according to officials, a private cybersecurity firm—had devised a way to hack into Farook’s phone without Apple’s assistance. The firm’s technique has since been tested; the FBI’s dropping of the case means it worked.
And so the FBI has to let this test case go and wait—who knows how long—for another tempting case to materialize. Apple, which some lawyers and industry experts believed had a weak case to begin with, legally and politically, should be heaving a sigh of relief—but there’s a downside from its viewpoint, too.
For years, Apple has crafted a brand based on its absolute commitment to security. Unlike Google and other giants of Silicon Valley, Apple doesn’t sell your data. Its operating systems aren’t licensed to other companies and are, so it’s claimed, much more resistant to hacking. In the past month’s court fight, the FBI was polishing Apple’s luster by claiming that not even the bureau itself could crack the iPhone’s code without the help of Tim Cook’s engineers.
And now, some hole-in-the-wall private hacking firm has done the undoable—hacked its way in. Apple may have dodged a costly court battle, but its brand has been bruised.
There are dozens of companies around the world engaged in the lucrative trade of finding “zero-day exploits”—vulnerabilities in computers, operating systems, networks, and so on that the vendors haven’t yet discovered. The NSA has an elite corps of superhackers—called the office of Tailored Access Operations—that specializes in this task. But neither TAO nor any foreign counterpart holds a monopoly on these tools and techniques; clever hackers in private enterprise—some of whom once worked in government outfits—can perform some of these tricks, too. (Some reports have identified the hacker of Farook’s iPhone as Cellebrite, a firm made up of several ex-officers in Unit 8200, the Israeli military’s cyberwarfare organization. The company has, in any case, advertised its ability to break Apple’s latest codes.)
The more legitimate zero-day outfits sell their discoveries to the companies whose vulnerabilities they’ve uncloaked. Most companies pay hefty bounties for this work: Better to get the news—and ideas for a fix—from friendly or neutral parties than to get mercilessly hacked by the Russians or Chinese. (Google, whose Chrome system was hacked by China a few years ago, now offers $100,000 to anyone who hacks into its source code.) Apple is one of the few Silicon Valley behemoths that don’t pay bounties to hackers. This may be why the small firm—whether it was Cellebrite or something like it—went to the FBI instead of Apple.
Tim Cook will no doubt demand that the FBI reveal how the firm got in so that his engineers can patch the hole. The FBI wants to try out the technique on some other iPhones that law-enforcement officers around the nation have been trying to unlock in criminal investigations. The government’s legal obligations here are unclear—as is this whole realm of the law, with its sources and case histories long predating cellular and digital technology.
Apple hasn’t adopted a bounty program for hackers because doing so would amount to admitting that its brand is a bit of hype, that its wares can be hacked. But with the dropping of the FBI’s case, the curtain has been pulled back on that one.
More than 30 years ago, Willis Ware, a computer engineer at the Rand Corp. and a member of the NSA’s scientific advisory board, revealed a secret to two young men who were writing a movie called WarGames and wanted to know if their scenario—about a kid who hacks into the main computer of the North American Aerospace Defense Command—was plausible. “The only computer that’s completely secure,” Ware told them, “is a computer that no one can use.” It was a secret back then; it’s common knowledge today.