Future Tense

Why We Should Care That Facebook Accidentally Deplatformed Hundreds of Users

It’s time to think about how social media companies should address their content moderation mistakes.


This week, as part of the company’s efforts to cull “bad actors,” Facebook accidentally deplatformed hundreds of accounts. The victims? Anti-racist skinheads and members of ska, punk, and reggae communities—including artists of color. Some users even believed their accounts were suspended just for “liking” nonracist skinhead pages and punk fan pages. While Facebook has kept mum on the reasons behind the mistake, it seems likely, as OneZero reported, that the platform confused these subcultures with far-right, neo-Nazi skinheads.

It’s not exactly a hard mistake to make. The skinhead aesthetic has long been associated with white supremacist groups. (The Southern Poverty Law Center considers racist skinheads “among the most dangerous radical-right threats facing law enforcement today.”) But the first skinheads, who emerged in 1960s London, had nonracist and multicultural roots—they were influenced by Jamaican immigrants and shared close ties to the ska, reggae, and punk scenes. With the rise of neo-Nazi groups, the movement split in the ’70s and ’80s into the racist and nonracist factions that exist today. While the latter, which exist worldwide, have been trying to set the record straight for the past few decades, the default image of the skinhead as white supremacist remains.

Facebook’s confusion was not a terribly serious incident, and the platform was quick to smooth it over: A few days later, the accounts were reinstated, and the company apologized. Yet while it may have caused only minor inconvenience, the mistake illuminates the tensions at the heart of content moderation today. As we’re coming to accept the inevitability of mistakes in monitoring platforms of this scale, we’re left with a simple calculus: Either platforms can remove only the most dangerous content, leaving some harmful speech up, or they can cast a wide net and remove harmless speech in the process. If we favor the latter—and platforms have indeed been moving toward this—then we need to consider the extent to which mistakes are necessary, or excusable, in a social media company’s greater mission to wipe misinformation and hate speech from its platform.

The deplatforming incident comes as social media companies have increased their efforts to regulate content in response to the dual pressures of the presidential election and, especially, the coronavirus pandemic. Just last November, Facebook was criticized for refusing to ban white nationalists and other hate groups despite promises to do so. And while the company hasn’t exactly abandoned its laissez-faire approach to content moderation, Facebook, among other platforms, has culled and flagged misinformation, hate speech, and harmful content at unprecedented rates in the months since. Last week, for instance, Facebook removed nearly 200 accounts tied to white supremacist groups.

Anti-racist skinheads and musicians are just the latest victims of these policies. In April, for example, Facebook threatened to ban DIY mask-makers from posting or commenting and to delete groups coordinating volunteer efforts to craft them. (The automated content moderation system had confused volunteer posts with the sale of medical supplies.) This month, Facebook and Instagram unblocked the #sikh hashtag after sustained public pressure to reverse the three-month restriction. The reasons for the initial block were unclear—Instagram said it was a mistake due to “a report that was inaccurately reviewed by our teams”—but critics pointed out that it occurred during the 36th anniversary of Operation Blue Star, an Indian army assault on a Sikh temple that killed at least 400 civilians.

One of the main reasons for such mistakes is the increased reliance on artificial intelligence. Social media companies have used A.I. for years to monitor content, but at the start of the pandemic, they said they would rely on A.I. even more as human moderators were sent home, admitting that they “expect to make more mistakes” as a result. It was a rare moment of candor: “For years, these platforms have been touting A.I. tools as the panacea that’s going to fix all of content moderation,” said Evelyn Douek, a doctoral student at Harvard Law School and affiliate at Harvard’s Berkman Klein Center for Internet & Society.

While A.I. allows large platforms to moderate content at a scale inaccessible to humans—and saves underpaid workers from further exposure to disturbing posts—its shortcomings are well documented. As Sarah T. Roberts, an assistant professor of information studies at UCLA, wrote in Slate earlier this year, A.I. tools are “overly broad in yielding hits, unable to make fine or nuanced decisions beyond what they have been expressly programmed for.” And many human rights and free speech organizations see the “bluntness of these tools as less of a mistake and more of an infringement on the right to create, access, and circulate information,” Roberts wrote.
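To make the “overly broad” point concrete, consider a purely illustrative sketch—this is not Facebook’s actual system, and the blocklist term is a hypothetical example—of how a context-blind keyword filter flags an anti-racist ska show announcement just as readily as a genuine hate-group post:

# Illustrative sketch only: a hypothetical context-blind keyword filter,
# not Facebook's real moderation pipeline.
BLOCKLIST = {"skinhead"}  # hypothetical term associated with hate groups

def flag_post(text: str) -> bool:
    # Flags any post containing a blocklisted word, with no sense of context.
    words = text.lower().replace(",", " ").replace("!", " ").split()
    return any(term in words for term in BLOCKLIST)

posts = [
    "Join our neo-Nazi skinhead crew this weekend",            # intended target
    "Anti-racist skinhead ska night on Friday, all welcome!",  # false positive
]
for post in posts:
    print(flag_post(post), post)  # prints True for both posts

Both posts come back flagged; the filter has no way to tell the subcultures apart, which is roughly the failure mode Roberts describes.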

The problem, of course, is that A.I. is here to stay—and that, partly as a consequence of this, we should expect to see many more mistakes on these platforms. But that doesn’t mean we should look at content moderation from a defeatist standpoint. “We need to start thinking about what kinds of mistakes we want platforms to make,” said Douek, who also noted that the conversation has been slow to reach this point “because people get uneasy talking about that kind of calculus in the context of speech rights.”

One way of approaching that question, Douek believes, is by considering the different kinds of speech that platforms regulate, and discerning which ones to prioritize over others. “I think that there are really different interests at stake when you’re talking about speech,” Douek said. “Harmful speech in the context of a pandemic, where the line between misinformation and physical harm is especially direct and urgent, is somewhat different to how to deal with political misinformation or falsehoods, where the best response may not necessarily be censorship, but may be other tools that platforms have at their disposal like fact checks and flags.” She added, “I would be hesitant to overlearn the lessons of the pandemic.”

Another step is to demand greater transparency from social media companies. Many people are concerned about algorithmic bias, for instance, but it’s still unclear whom that bias affects in content moderation. As Douek mentioned, conversations about anti-conservative bias, anti-leftist bias, and racial bias are all occurring simultaneously. The problem is that platforms are notoriously cagey about their data and algorithms. “We need to crack these platforms open and get independent researchers access to the data to start working out exactly what’s going on so that we can construct empirically based responses to it,” she said.

But that transparency extends beyond understanding how Facebook’s A.I. tools function. Just consider the case of the accounts associated with anti-racist skinheads: Although Facebook apologized, the company has yet to explain what went wrong. We don’t know, as Douek pointed out, whether this action was related to recent deplatformings; we don’t know if it was in response to an emergency; we don’t know if it was simply part of routine maintenance. For all we know, human content moderators could have misunderstood the relationship between the different subcultures.

Ideally, there needs to be a better system for understanding and challenging these kinds of mistakes. “At the moment, these platforms just don’t have sufficient appeals and error correction processes,” Douek said. If we’re finally going to accept that, in a post-pandemic world, many more errors will be made—and that they will have serious implications for speech—then, at the very least, we need to shift the conversation to how we want these platforms to address them.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.