More security, we’re generally inclined to believe, makes us more secure. The entire industry of computer security tools and services—from firewalls to authentication systems to password managers—is predicated on the notion that adding security to your computer systems makes them less vulnerable to attack or infiltration. But a bug disclosed last week serves as an important reminder that every new layer of security you add introduces new potential vulnerabilities, even as it may reduce or eliminate others.
This time, the culprit was Cloudflare Inc. Even if you’ve never heard of Cloudflare, odds are your online activity has passed through its servers: It handles traffic for popular services including Uber, Fitbit, 1Password, and OkCupid. Those companies, and many others, hire Cloudflare to help them ensure that their online traffic and servers are secure, reliable, and speedy. For instance, among other services, Cloudflare can help protect customers from denial-of-service attacks and configure SSL encryption for their websites.
So it was startling when Google security researcher Tavis Ormandy reported on Feb. 19 that he’d identified a bug that let him access what was supposed to be private web data, including passwords and encryption keys, from sites supported by Cloudflare. It seemed that some sites—we don’t know which ones—would accidentally also load the private information of other users. The leaked data was hidden from casual users inside a webpage’s source code, but it was easily accessible to anyone who cared to look for it.
The company has handled this admirably (more on that in a bit). Still, the bug has been around since at least September 2016. Cloudflare insists that there’s no evidence anyone has exploited it since then (it hasn’t found any of the leaked information posted on sites like Pastebin) and that it only affected a very small portion of the traffic Cloudflare handles (about 0.00003 percent during its period of greatest impact, according to Cloudflare’s calculations). But when it comes to security, it’s always best to assume the worst.
That means that you should do all the same old post-security scare things that everyone so deeply resents: changing passwords, activating two-factor authentication, logging out and back into mobile applications. Unfortunately, it’s made slightly more complicated in this case by the fact that you don’t have a Cloudflare account or password that you can just change the way you would your Yahoo password. (Incidentally, have you changed your Yahoo passwords since its breaches? Go do that.) Instead, since Cloudflare provides largely invisible but ubiquitous network infrastructure for many major web services, assuming the worst in this case means applying these steps to just about all of your online accounts. Sorry.
One of the hardest and most complicated parts of writing about security breaches or vulnerabilities is trying to characterize how big a deal they are. After all, most of you don’t really care about exactly what happened, and most of you probably aren’t going to change any of your passwords anyway—you just want some sense of where this falls on the spectrum from pop-up ads to Yahoo breach to Heartbleed to apocalypse. The tendency is always to tell you that this is (maybe!) the worst one yet. But the truth is—in this case as in many others, especially when it comes to vulnerabilities rather than breaches—that’s a very hard assessment to make, because there’s so much we don’t know about this bug: whether it was exploited, and if so, by whom, or how many people were affected and in what ways. The best I can do is tell you, yes, potentially, it could be a very big deal. And also, potentially, it could have gone completely unnoticed for six months and be no big deal at all.
Beyond that, the fact that it originated from such a respected and popular security provider made for an uncomfortable moment of reckoning. What are we supposed to do when the people we’re paying to protect our networks are, in fact, introducing new vulnerabilities to our systems? Go find a new security company whose code doesn’t have any bugs? Good luck.
Cloudflare, to its credit, did the thing that too few tech companies are willing to do. It quickly published a full, detailed description of the bug and information about what had been done to remedy it. If you’re at all familiar with the programming language C, I highly recommend reading through it. If you’re not, you can settle for the Register’s punchy yet accurate description that the whole screw-up came down to a single comparison: an equality check (==) where a greater-than-or-equal check (>=) was needed. Essentially, when Cloudflare processed websites it had been hired to provide services for, it used a parser that didn’t properly check whether it had reached the end of the chunk of memory it was reading. In certain circumstances, the parser could jump past the end of that buffer, so the equality check never triggered and the code kept reading adjacent memory it wasn’t supposed to touch. (This is called a buffer overrun.) That, in turn, led to data leaking into webpages where it shouldn’t have been.
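To make the one-character difference concrete, here is a minimal sketch in C. It is not Cloudflare’s actual code (which was generated by the Ragel parser toolkit); the function name and the two-bytes-at-a-time stepping are illustrative assumptions, chosen only to show how a cursor that skips past the exact end of a buffer defeats an equality check but not a greater-than-or-equal check.

```c
/* Illustrative sketch of the bug class, not Cloudflare's real parser.
   A parser that can advance its cursor by more than one byte may jump
   past the end of its input. An equality end-of-buffer check then
   never fires, and the loop keeps consuming memory beyond the buffer. */
int consumed_with_check(int len, int step, int use_gte) {
    int p = 0;        /* cursor position, standing in for the parser pointer */
    int consumed = 0; /* how many bytes the loop has "read" */
    while (1) {
        /* The one-character difference: >= catches an overshoot, == does not. */
        if (use_gte ? (p >= len) : (p == len))
            break;
        p += step;
        consumed += step;
        if (consumed > len + 16) /* demo-only guard against a runaway read */
            break;
    }
    return consumed;
}
```

With a 5-byte input consumed two bytes at a time, the cursor visits 0, 2, 4, 6 and never equals 5: the >= version stops after 6 bytes, while the == version keeps reading past the buffer until the demo’s safety guard trips—in real code, that extra memory is whatever happens to sit next door, which is exactly how other customers’ data leaked.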
Even if you don’t quite understand the more-than-3,000-word blog post written by Cloudflare Chief Technology Officer John Graham-Cumming, you should be at least a little reassured by the fact that it came out with more than the standard “We are working to resolve this problem. We continuously enhance our safeguards and security systems.” Cloudflare is potentially opening itself up to more criticism and mockery by describing exactly what happened (see, for instance, all the attention paid to a single comparison operator). But it’s also admitting its mistake and proclaiming it has nothing to hide. It’s inviting everyone else, including many people with the necessary technical expertise to understand what happened, to draw their own conclusions about how well it responded. And, for what it’s worth, once the company learned about what had happened, it responded fairly well. By comparison, in 2015, researcher Felix Wilhelm found some vulnerabilities in the services provided by another respected security firm, FireEye, which promptly went to court to request an injunction preventing Wilhelm from disclosing any information about his findings.
What this breach does illustrate—beyond the general cloud of uncertainty that hovers over any attempt to quantify or assess the seriousness of an individual vulnerability—is that every time we trust a company, tool, or piece of software to make our computers more secure, we also create a new potential vulnerability. We’re now relying on that company, tool, or software, so if there’s any weakness in it (and there almost certainly is), that’s our weakness, too. That’s not a reason to avoid all security products and services, but it is a reason to choose them carefully and not assume that more security always equals better security. New security protections create new vulnerabilities. More computer security doesn’t necessarily make us more or less secure; it just makes us secure—and insecure—in slightly different ways.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.