In an era when data breaches and cyberattacks are in the news daily, what does it take for a cybersecurity story to stand out and capture the public's attention? Which recent threats have stuck in your memory? Heartbleed and Shellshock, probably; perhaps also FREAK and POODLE. Cybersecurity experts know that branding matters. Give your newly discovered bug a scary name and a striking logo, and let the headline writers do the rest.
Last month, the widely used code repository site GitHub and GreatFire.org, which tracks information on Chinese Internet censorship, were hit with distributed denial-of-service, or DDoS, attacks linked to popular Chinese search engine Baidu and, by extension, the Chinese government. What was most striking about these attacks was not the choice of targets—China has long been wary of GitHub, and GreatFire is dedicated to publicizing details of blocked sites and searches in China—but rather China's departure from its tried-and-true tools for internal Internet censorship. Instead of blocking visitors within China from connecting to the sites, the Chinese government appeared to have redirected some traffic intended for Chinese Baidu servers to bombard GitHub and GreatFire.
When a group of researchers at the University of Toronto, the University of California–Berkeley, and Princeton University set out to study how that happened, they knew they needed to christen the attack capabilities if they wanted to draw attention to what made this situation different. Last week, the researchers released a fascinating report detailing how traffic to Baidu was redirected to attack targeted sites. Its title? “China’s Great Cannon.”
The Great Cannon combines the trend of catchy names for newly discovered security threats with the enduring influence of cybersecurity metaphors. As a name, it’s both memorable and evocative. You can see why it beat out “a distinct attack tool that hijacks traffic to (or presumably from) individual IP addresses, and can arbitrarily replace unencrypted content as a man-in-the-middle” as the report’s title.
“We just wanted something simple, descriptive, to the point, and slightly related to the Great Firewall,” said Nicholas Weaver, the report’s co-author and the one who initially dubbed the Great Cannon capability. The Great Cannon (or GC, as it is called throughout the report) refers to the researchers’ findings that unencrypted Web traffic for Baidu.com could be selectively intercepted and redirected to new destinations, effectively bombarding targeted sites like GitHub and GreatFire with traffic that was originally intended for Baidu.
The authors of the new report attribute the Great Cannon capability to the Chinese government, offering as evidence the similarities between the Great Cannon and China’s “Great Firewall” censorship capabilities, including some similar source code and structure, plus their shared location. But they also go to some trouble to explain how the Great Firewall and the Great Cannon are different. While the Great Firewall can prevent Internet users from accessing certain content by injecting extra traffic that terminates their initial requests, it does not actually suppress those requests. This means that if you’re trying to visit a website like Facebook and you’re blocked by the Great Firewall, that website still receives your request to connect—and then, immediately receives another request to terminate that connection.
So the Great Firewall is hard to keep secret—websites that are blocked by it can see that they’re being blocked because of the odd pattern of connection attempts and termination requests. But the researchers found that when a connection to Baidu is intercepted by the Great Cannon, it doesn’t just inject new traffic to target that request to a different site like GitHub or GreatFire. It also drops the initial request to Baidu. So Baidu never receives those queries and has no way of knowing that its traffic is being tampered with.
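The contrast the researchers draw can be sketched in a few lines of code. This is a toy simulation only, not actual packet handling, and every name in it (`Request`, `firewall_style`, `cannon_style`) is illustrative rather than drawn from the report: the point is simply that an on-path injector lets the original request through and adds a forged termination, while an in-path interceptor drops the request and substitutes its own traffic.

```python
# Toy model of the difference described above. Real censorship systems
# operate on live network packets; this sketch only mimics which
# messages each party would observe.

from dataclasses import dataclass


@dataclass
class Request:
    src: str  # the visitor's address
    dst: str  # the site the visitor asked for


def firewall_style(req, blocked):
    """On-path injection (Great Firewall-like): the original request
    still reaches its destination, but a forged reset follows it.
    The destination sees the odd request-then-reset pattern."""
    delivered = [("request", req.src, req.dst)]
    if req.dst in blocked:
        delivered.append(("reset", req.src, req.dst))  # injected traffic
    return delivered


def cannon_style(req, intercepted, target):
    """In-path interception (Great Cannon-like): the original request
    is dropped entirely and replaced, so the intended destination
    never sees it, and the traffic is aimed at the target instead."""
    if req.dst in intercepted:
        return [("request", req.src, target)]  # substituted traffic
    return [("request", req.src, req.dst)]


# A blocked visit: the site still observes the request, plus a reset.
print(firewall_style(Request("visitor", "facebook.com"), {"facebook.com"}))

# An intercepted visit: baidu.com sees nothing; github.com gets the traffic.
print(cannon_style(Request("visitor", "baidu.com"), {"baidu.com"}, "github.com"))
```

In the first case the destination can detect that it is being blocked; in the second, the tampering is invisible to the site whose traffic was hijacked, which is exactly why the researchers describe the Cannon as potentially much stealthier.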
In other words, the Great Cannon has the potential to be much stealthier than the Great Firewall, to mask its footprints much more effectively and stay hidden—unless, of course, it’s used to launch very high-profile attacks like those against GitHub and GreatFire. “I was surprised that they did something so visible,” Weaver said of the DDoS attacks employing the Great Cannon. “That [China] built this sort of capability does not surprise me, but that they were willing to use it so publicly does.”
It’s funny to think of a cannon as a quiet weapon. Of course, stealth was not the trait that dictated the authors’ choice of name—rather, the emphasis seems to have been primarily on reinforcing the idea that it was a weapon at all.
“Instead of being defensive like a firewall, we wanted to make it very clear this was an offensive weapon,” the report’s lead author, Bill Marczak, told me. Where the Great Firewall just tries to stop people from visiting certain sites, in other words, the Great Cannon actually attacks those sites directly.
The naming of the Great Cannon is about more than just branding. It also taps into a long tradition of metaphorical language that has shaped many of the ways we talk—and think—about computer security. References to walls and viruses are so deeply ingrained in our cybersecurity vocabulary that we’re barely even aware of their status, much less their evocative power, as metaphors. Other metaphors we invoke more consciously and deliberately, often to make complicated technical concepts more accessible.
I like metaphors—and I use them, especially when writing for a nontechnical audience—but I wonder if we sometimes fail to appreciate how powerful they are in framing cybersecurity issues. After all, every metaphor is limited in certain ways. A cannon is a tool for aggressive attack, but it’s not an object of stealth. It’s a helpful shorthand for explaining how the attacks against GreatFire and GitHub were launched, but it is much less evocative of the alternative use the report’s authors propose for this capability—delivering malware to machines that are communicating with unencrypted Chinese servers.
The Great Firewall, which so richly conjures the actual Great Wall and has become synonymous with Chinese online censorship, is also both useful and slightly misleading as a metaphor. It conveys the feeling of hitting a barricade as one tries to load a blocked website from within China—but it obscures the fact that that “blocking” mechanism is actually the result of injecting traffic. This is a recurring theme in cybersecurity: We select metaphors that emphasize the impact of threats rather than the mechanics of how they work.
These metaphors don’t just shape public perception of cybersecurity; they also affect how laws are applied to technologies like encryption, and they invoke very particular ideas about how we should defend against certain threats and about who should be responsible for that defense. We rely on very different tools and people to protect us against cannons, viruses, and thieves.
In lots of ways the cannon is a great choice of metaphor for last week’s report—it helps call attention to an important security story and highlights some of the key points of a fairly technical piece of writing. Those are important roles for metaphors to play in cybersecurity debates, but it’s also worth remembering that these issues are actually very real and concrete—in other words, not metaphors at all.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.