The Industry

Why Facebook’s Latest Ban Was So Underwhelming

Because the deplatforming was opaque, ostensibly limited, and staged like a spectacle.

Alex Jones, Milo Yiannopoulos, and Laura Loomer.
Photo illustration by Slate. Photos by Brooks Kraft/Getty Images, Michael Masters/Getty Images, and John Lamparski/Getty Images.

On Thursday, we found out what a defenestrated troll sounds like. That afternoon, Facebook banned Infowars, Alex Jones, Paul Joseph Watson, and other inflammatory figures, including far-right personalities Laura Loomer and Milo Yiannopoulos, white supremacist politician Paul Nehlen, and Nation of Islam leader Louis Farrakhan, who has long been criticized for anti-Semitic and homophobic views. These bans are reportedly permanent and extend to the fan pages and groups affiliated with their accounts.

The breakup wasn’t clean. The news broke before Facebook had actually banned all of their accounts across its platforms. Loomer and Yiannopoulos were still able to post to Instagram for nearly an hour after the Washington Post, the Atlantic, CNN, and the Verge published stories saying they were getting the boot. In that time, they used their accounts to tell their legions of followers where else to find them. Alex Jones, meanwhile, was able to stream on Facebook Live for nearly two hours after the world learned that he was technically no longer welcome on the platform. Facebook told Wired that the lag happened because scrubbing these characters’ footprints was a bigger job than it had anticipated.

While Facebook briefed news organizations about these actions ahead of time, the company didn’t specify how these accounts had violated the platform’s policies. Instead, a spokesperson told multiple outlets that the company has “always banned individuals or organizations that promote or engage in violence and hate, regardless of ideology,” which was a bit tough to swallow, considering these accounts have been spewing hate for years—and many, many hateful accounts remain on the social network. (A quick search Friday on Facebook for the term “jews oven” unearthed a page called “Jewsinoven?”) The Facebook spokesperson continued, “The process for evaluating potential violators is extensive and it is what led us to our decision to remove these accounts today.” Facebook didn’t share which rules specifically were violated or what that evaluation process involves. Presumably, if Thursday’s actions reflected a new approach from Facebook, or at least a new sense of urgency, far more than seven accounts would have been banned.

Still: At the end of the day, a bunch of high-profile bigots had been stripped of a major platform. It should’ve resonated as a victory against the fringe figures who have benefited from the distortionary effects of social media, where ranking algorithms tend to favor divisive, emotional content. So why did this latest act of content moderation instead feel underwhelming?

Deplatforming certainly does help to reduce the spread of hate. Since Alex Jones lost his main Facebook and YouTube pages in August, traffic to Infowars has plummeted. Milo Yiannopoulos, a far-right provocateur who was banned from Twitter for directing racist harassment at the actress Leslie Jones, can no longer receive financial backing from his fan base via Venmo or PayPal and is reportedly in severe debt. (Those services banned him last year after he sent $14.88, a number that symbolizes a salute to Hitler in neo-Nazi communities, to a Jewish journalist.) But, particularly in Facebook’s case, deplatforming also has to align with a set of clearly articulated policies so that it isn’t read as a tyrannical act of corporate censorship that will further inflame accusations of bias. In this case, Facebook created a news story in much the way it might if it had announced a new product, but it didn’t actually say specifically why the accounts were removed. What should have been a by-the-book punitive act became a spectacle—and probably one that Alex Jones and the like will try to spin to their advantage. Facebook has the power to punish wrongdoers, as it did on Thursday. But we don’t know its full rationale for doing so, nor do we know who will be next.

The lack of transparency is so troublesome because Facebook’s content moderation processes aren’t only applied to famous racists. For years, black users on Facebook have been forced to navigate the platform’s mercurial enforcement of its speech policies. Suspensions of black activists who complain about racism have become so routine that it’s now common practice in activist communities to create backup accounts and use slang like “wypipo” to dodge the company’s content moderation algorithms. Complaining about racism isn’t hate speech. But Facebook appears to have done less hand-wringing when moderating content from this community than it has with content that is anti-Muslim, anti-Semitic, or racist, or that promotes dangerous conspiracy theories that have led to violence. While figures like Alex Jones might attract the attention of higher-up Facebook executives, most people are moderated by a mix of algorithms and low-level contract workers—and are subject to a broad brush with little room for appeal.

I emailed Facebook to ask specifically which rules were violated, what the process was for reviewing them, and whether this means more accounts, presumably belonging to lesser-known users, will soon be banned for engaging in hateful rhetoric, too. I have yet to hear back. But unless this move is part of an overall cleanup effort in which the company shares its rationale for taking action and promises to do so consistently in the future, don’t expect Facebook to become free of bigotry anytime soon. Removing hate will always be a game of whack-a-mole. It’s good to ban high-profile bigots. It’s also critically important to explain in clear terms, either to the account holder or to the public, which policy was violated, how many violations were tabulated, and what the banned users did to violate it. Simply saying the company “always” does this is neither credible nor sufficient.

What might be more bothersome, however, is that Facebook risks unleashing a whole other breed of hate and disinformation across its network—one that a high-profile act of deplatforming doesn’t address.

Earlier this week, Facebook shared that it is redesigning its platform to promote sharing within private groups, which would reduce the prominence of the more open news feed. Moving people into private rooms will certainly make it a lot easier for Facebook to continue its haphazard style of governance. It’s a lot easier to promote and share bigotry in a closed group of racists than it is to do so on a public page—and for that bigotry to spread widely without anyone noticing, as it did on WhatsApp during the Brazilian elections last year. And it’s a lot harder for users who are trying to fight hate to report it. I expect the people who lost their accounts on Thursday to start new ones soon, or worse, to commandeer an account or group with a large following from an ally. Sure, they probably won’t have the reach they did before, but hate is insidious. Policies against racism don’t eradicate racism. Unless Facebook applies its rules consistently and transparently, people with an agenda will find a way to crawl back to their fans. And if they’re in big private groups, where only their fellow sexists, anti-Semites, Islamophobes, homophobes, and racists are allowed in, they may well find a hideout there too.