Where to Draw the Line on Deplatforming

Facebook and YouTube were right to delete the video shot by the New Zealand shooter. Internet providers were wrong to try to do it, too.

After a shooter livestreamed himself killing 50 Muslim worshippers in New Zealand earlier this month, one of the places where footage of his broadcast lived on was 8chan—the same shadowy message board where he posted a manifesto and chillingly called his actions “a real life effort post.” While mainstream platforms like Facebook and YouTube mobilized to take down uploads of the video as it was reposted thousands, even millions, of times, 8chan left it up. So did its cousin 4chan (whose /pol/ board is a similar magnet for far-right trolls), as well as sites like the social network Voat, the video-hosting site LiveLeak, and the blog Zero Hedge. Because these places were not willing to remove the video, New Zealand and Australia’s major internet service providers decided to take action: They blocked access to any website that continued to host it. As of this week, 8chan appeared to still be blocked in New Zealand, but 4chan and Voat were accessible again, suggesting the blocks on those sites had been lifted.

It might seem obvious that these companies ought to block access to a video containing an act of horrific violence by whatever means possible. But the way it happened marks an unusual and worrisome moment. As a general principle, internet service providers aren’t supposed to erect barriers between the users they serve and the websites those users want to visit. They tend to observe this rule even in places like Australia and New Zealand that don’t have net neutrality policies preventing ISPs from blocking access to websites. The exception tends to be when such blocks come at the behest of law enforcement, perhaps out of concern for public safety. But the telecom companies in New Zealand and Australia didn’t decide to kick these websites offline in collaboration with law enforcement. Rather, they felt that the blocks were simply the responsible thing to do. “We must find the right balance between internet freedom and the need to protect New Zealanders, especially the young and vulnerable, from harmful content,” executives from Vodafone NZ, Spark, and 2degrees wrote in a joint statement.

What should be done about 4chan, 8chan, and other awful internet places whose ugliness spills into public view? It’s a question Americans have confronted as recently as last week, when public schools in Charlottesville, Virginia, shut down in response to an anonymous, invective-filled post on 4chan that threatened “ethnic cleansing” at a local high school. Other 4chan users encouraged the action, taunting, “School shooting tomorrow.” Police eventually found a 17-year-old whom they say wrote the post. Gab, another social media site that attracts racist “free speech” enthusiasts, has a policy against explicit calls for violence but no rules against hate speech, and so hate speech flourishes there. The Pittsburgh synagogue shooter was active on Gab, where he posted anti-Semitic missives and announced, “I’m going in,” before killing 11 people during Saturday morning services in October.

A loose movement to push back against these spaces of hate has emerged since the 2016 election. Users of Twitter, Facebook, and YouTube have demanded the companies actually enforce their policies against hate speech—which they have tried to do, with varying degrees of enthusiasm and success. Elsewhere, some online services have deplatformed places that explicitly welcome hate speech (such as neo-Nazi havens the Daily Stormer and Stormfront), refusing to continue providing web hosting and security services. These are private firms, and few would ask governments to crack down on 8chan or its ilk—most of the time, governments should stay away from policing speech at all. But the example of New Zealand and Australia may offer a tempting place to turn instead. Internet providers appear to be one group that has the power to make these sites inaccessible. That doesn’t mean they should.

While it’s refreshing to see technology companies act swiftly to protect their users, in this case it’s also unsettling. Internet providers operate at a layer above websites, users, and even many infrastructural parts of the web. Many may argue that Facebook shouldn’t referee speech at all, but it’s clear to everyone that Facebook is within its rights to decide what conversations happen there. That’s why so many now make the persuasive case that it can do a better job moderating those conversations—which isn’t censorship. But it is censorship when ISPs, which are merely gateways to those conversations, try to take on hate speech or other content themselves. We don’t want ISPs making those calls.

Don’t confuse ISP blocks with other kinds of deplatforming. After the 2017 Unite the Right rally in Charlottesville, internet service companies like Cloudflare, Google, and GoDaddy stopped providing hosting and security to the Daily Stormer, a prominent neo-Nazi website where the event was organized. In this version of deplatforming, one or multiple companies decide to terminate their business relationship with another company, often resulting in an effective removal from the mainstream internet. This may happen because one firm’s terms of service were violated, as when a hosting company prohibits hate speech or when Facebook and YouTube removed Alex Jones’ channels last year. Or in the case of Cloudflare refusing service to the Daily Stormer, it was because, as CEO Matthew Prince put it, “I woke up this morning in a bad mood and decided to kick them off the Internet. … It was a decision I could make because I’m the CEO of a major Internet infrastructure company.” But even this capricious-sounding case is less of a problem than an ISP taking action. The Daily Stormer was a Cloudflare customer, and Prince decided to stop working with it. And while he’s right that Cloudflare provides infrastructural services for websites, it’s not an internet provider. It doesn’t run the tubes that the internet travels on.

It’s troubling when a company with concentrated power decides to stop doing business with a website, thereby curbing its reach. But it’s still that company’s choice to decide whom it does business with and how. The situation is different when that company doesn’t have a direct business relationship with websites but rather controls the lanes that deliver those websites to consumers.

It also might not work. “[An ISP] blocking 4chan or 8chan ignores the fact that many of the users of these sites are sophisticated enough to have access to VPNs and other ways of evading this censorship,” said Ethan Zuckerman, director of the Center for Civic Media at MIT. It’s possible, for example, to access content blocked in your country by using a virtual private network, which allows users to route their internet access through servers in other locations. “Communities like 4chan and 8chan include many people interested in accessing forbidden content, violating national copyright restrictions,” Zuckerman said. “It’s hard to think of a community where a technical means of blocking access is less likely to work.”
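
The New Zealand providers haven’t said how they implemented their blocks, but one common technique is DNS filtering, in which the ISP’s own resolver simply refuses to answer for a blacklisted domain. That is exactly the kind of block a VPN—or even just a different DNS resolver—sidesteps. As a rough illustration only, here is a minimal Python sketch, using a hypothetical placeholder domain and the third-party dnspython library, of how such a discrepancy would show up:

    # A minimal sketch of probing for DNS-level filtering. The domain is a
    # hypothetical placeholder, and this assumes the ISP blocks at the DNS
    # layer; IP-level or deep-packet-inspection blocks would not show up here.
    # Requires the third-party dnspython package: pip install dnspython
    import socket

    import dns.exception
    import dns.resolver

    SITE = "blocked-example.test"  # placeholder, not a real site

    # Ask the system resolver, which is usually run by the ISP. A DNS-level
    # block typically returns NXDOMAIN or a "sinkhole" address here.
    try:
        isp_answer = socket.gethostbyname(SITE)
    except socket.gaierror:
        isp_answer = None  # resolution failed, consistent with a block

    # Ask a public resolver directly, bypassing the ISP's DNS entirely.
    # A VPN applies the same principle to all traffic, not just lookups.
    public = dns.resolver.Resolver(configure=False)
    public.nameservers = ["1.1.1.1"]  # Cloudflare's public resolver
    try:
        public_answer = public.resolve(SITE, "A")[0].to_text()
    except dns.exception.DNSException:
        public_answer = None

    if isp_answer is None and public_answer is not None:
        print("System resolver fails but a public one succeeds: likely DNS filtering.")
    else:
        print("No DNS-level discrepancy detected.")

In other words, a block like this binds only the users who keep their ISP’s default settings—which is Zuckerman’s point about how poorly it fits these particular communities.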

Censorship also calls attention to what’s being censored, since people will probably be curious about what was blocked—and in this case, the blocks came without even the justification of a stated company policy. And when an internet provider blocks a website full of hate, users of that site can cry censorship, politicizing and potentially strengthening their community. “Blocking like this allows people to say, ‘Our speech is being censored,’ and therefore you are riling up a community who can go elsewhere on the web, and then they’re connecting and congregating around a ‘We’ve had our voices silenced’ line. And then there becomes kind of a victim narrative here that can act as a recruiting mechanism,” said Claire Wardle, a TED fellow and executive director of First Draft, an organization that helps journalists and researchers find and study disinformation. These users would have a point. As disturbing as parts of 4chan and 8chan are, plenty of corners of those sites aren’t used for hate. 4chan has thriving message boards devoted to anime, video games, pornography, and advice. Blocking an entire website also silences those groups, which may not share the views of 4chan’s hate groups but would still oppose blocking the site. This is the case with any kind of blunt blocking—more people will be affected than are at fault.

The internet service providers’ blocking appears to have happened outside of any specific policy for these kinds of crisis situations. Vodafone told me that it was unblocking websites once content from the Christchurch shooting had been removed, but it wouldn’t say which sites were blocked so as not to call further attention to those websites. The whole situation appears to be slightly, if not largely, ad hoc, said Rebecca MacKinnon, director of Ranking Digital Rights, a project that tracks how internet companies around the world protect freedom of expression and user privacy. “There’s no transparency about their policy for this kind of emergency situation,” MacKinnon told me. “They just kind of ad hoc decided this and didn’t appear to have a prior policy for what they might do in serious emergency and exceptional situations.” Blocking parts of the web so opaquely sets a troubling precedent. We may agree with what internet providers are blocking now; no one wants a shooter’s footage of his violent attack to spread. But in the future, the lines might not be as clear. In 2005, for example, the Canadian telecom Telus blocked access to a communication workers union website that promoted a labor strike against the internet provider. This is why net neutrality has become such an important principle: Internet providers shouldn’t be able to decide what users can and cannot see without oversight.

The Christchurch video is horrific. Platforms—especially massive, popular ones that attract hundreds of millions of users—should do everything they can to keep their communities safe. But the overly broad blocking of entire websites by internet providers, which operate at several layers above the platforms, isn’t going to make the horror disappear. It could strengthen these communities—and assign unnecessary powers to companies that no one asked to do the dirty work.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.