Future Tense

What Heartbleed Taught the Tech World

Lesson 1: Just because code works for a long time does not mean it’s perfectly fine.

Photo illustration: a sailing dinghy sinking into the Heartbleed code: buffer = OPENSSL_malloc(1 + 2 + payload + padding); bp = buffer;
Photo illustration by Lisa Larson-Walker. Photo by Johner Images/Walstrom/Susanne Creative/Getty Images

In 2014, a group of security researchers discovered one of the most widespread and potentially dangerous vulnerabilities ever identified in a system we rely on to secure our online communications. Dubbed Heartbleed, the vulnerability affected the popular open-source OpenSSL software used by many websites and other online applications to encrypt traffic sent to and from their users. Because OpenSSL was deployed so widely, the vulnerability affected millions of devices, including many Android phones. As an open-source library, the OpenSSL code could be viewed, and potentially fixed, by anyone. But that also meant that no single company was responsible for maintaining and securing it.

One of the primary lessons of Heartbleed was that open-source code like OpenSSL needed stronger institutional support for security, rather than relying on volunteer efforts to find vulnerabilities. Now more than five years old, the Heartbleed vulnerability has been fixed in many places (a report from July identified just over 90,000 devices still unpatched), but its legacy offers lessons about our reliance on open-source software libraries and how we support the maintenance and security of these sorts of crucial, shared resources.

Heartbleed wasn’t the first serious open-source code vulnerability to be discovered, and it won’t be the last. Also in 2014, researchers discovered another vulnerability, dubbed Shellshock, in Bash, the open-source shell installed on most Unix-like systems, and the vulnerability was quickly exploited to launch thousands of attacks. More recently, the 2017 Equifax breach of 147 million people’s personal information was linked to a vulnerability in the open-source Apache Struts web application framework, though in that case the vulnerability had already been found and fixed. Equifax just hadn’t updated its systems.

Security remains a struggle for many open-source projects. Nevertheless, Heartbleed was a real turning point. Following its discovery, the Linux Foundation, together with several partners, launched the Core Infrastructure Initiative to provide security services and other support for open-source projects. Several tech companies, including Amazon, Facebook, Google, and Microsoft, each pledged $100,000 per year for at least three years to fund the CII and its support for open-source code.

Part of what makes the Heartbleed vulnerability so striking—and part of the reason it led to so much soul-searching about how to do a better job with open-source code—is its simplicity. Back in 2014, DefenseStorm co-founder Sean Cassidy published a breakdown of the relevant code and the subsequent fix that remains the definitive analysis of the technical problem. But much of it comes down to a simple idea about computer memory that anyone can understand, even if you don’t know your segfaults from your bus errors, and a single line of code in the C programming language.

First, a quick lesson on code. (It gets a little technical, but not that technical.) C, unlike many other programming languages that are widely used today, requires you to manually manage a computer’s memory when you write code. That means using commands to instruct a computer when it needs to allocate space in memory for information and when it can release, or “free” up, that space again and let other data write over the information currently stored in it. Other languages, like Java and Python, take care of that for you.

This makes C a very powerful, terrifying, and often tedious language to program in: you can directly manipulate a machine’s memory, which can be a very useful and heady thing. But you also have to constantly worry about whether you’ve allocated and freed up memory correctly. When you screw up, one of two things can happen: The program won’t work the way you expect it to, and you’ll go back to debugging to try to figure out where exactly you went wrong. Or, even more frightening, it will appear to work perfectly, and you’ll never realize you haven’t handled the memory management properly.

The latter is what happened in the case of Heartbleed. For a simple, entertaining explanation of the technicalities, there’s an xkcd cartoon titled “How the Heartbleed Bug Works.”
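A stripped-down sketch of that pattern can help. The variable names below echo the line in the illustration above (`buffer`, `bp`), but the function and everything around it are simplified assumptions for illustration, not the actual OpenSSL source. The essence of the bug was trusting a length that the other side of the connection supplied.

```c
#include <stdlib.h>
#include <string.h>

/* Build a heartbeat-style reply: a 1-byte record type, a 2-byte
 * length, then `claimed_len` bytes copied from the request payload. */
static unsigned char *build_reply(const unsigned char *payload,
                                  size_t claimed_len) {
    unsigned char *buffer = malloc(1 + 2 + claimed_len);
    if (buffer == NULL)
        return NULL;
    unsigned char *bp = buffer;
    *bp++ = 1;                          /* record type              */
    *bp++ = (claimed_len >> 8) & 0xff;  /* payload length, high byte */
    *bp++ = claimed_len & 0xff;         /* payload length, low byte  */
    /* The bug: if claimed_len is larger than the payload the client
     * actually sent, this memcpy reads past the end of it and copies
     * whatever sits in adjacent memory -- keys, passwords, anything.
     * The fix was a bounds check: discard any request whose claimed
     * length exceeds the number of bytes really received. */
    memcpy(bp, payload, claimed_len);
    return buffer;
}
```

With an honest length the function behaves perfectly, which is why the code could sit in plain sight for years; only a deliberately lying length exposes the over-read.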

It’s important for people who don’t code to realize how common it is for a critical flaw to lurk, unnoticed, even as a program seems to work perfectly. Sure, many problems with code will prevent it from compiling and trigger lots of warning messages to the developer. But many more will simply go unnoticed as long as the people running the code aren’t doing anything too unusual or unexpected with it. That’s part of the reason that testing code on unusual and “edge” cases is such an important part of debugging it: looking at what happens if a function receives a much larger input than expected, or no input at all, or an input right at the boundary, or edge, of the size it’s supposed to be able to handle. By definition, pretty much all vulnerabilities that make it out into the wild are the kind that don’t prevent code from working most of the time. They’re the sneaky kind that lie in wait until someone thinks of doing something the coder didn’t anticipate.

But if you aren’t in the industry, those technicalities are less important than the broader lessons they offer. For people like me, who stopped programming in C as soon as they finished the required coursework, the Heartbleed code can be a chilling reminder of how hard it is to work in languages that afford programmers the power and responsibility of allocating and freeing up computer memory. The fix is relatively straightforward. But remembering to think that way—to think like someone who would be deliberately trying to subvert the code, to think about the computer’s memory and how it works, and to do both of those things at the same time—is often the hard part.

It’s harder still when the program appears to be functioning perfectly—as OpenSSL did for many years—and when a project is being maintained through volunteer efforts rather than by a dedicated, full-time staff of trained software engineers. Heartbleed is a vulnerability that someone should have caught sooner; it’s the kind of memory management vulnerability that we were learning about in my sophomore-year systems-programming class, the kind we’ve known for decades to look for. The code that created Heartbleed is a poignant reminder of just how easy it can be to let small coding mistakes slip through the cracks when everything seems to be working fine, and just how massive the ramifications of those seemingly small mistakes can be.

This article is part of a series on the most consequential lines of code in history. Read about 36 bits of software that have changed the world.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.