The Industry

Bigotry Has No Innocent Intent

If Facebook isn’t willing to ban even “misinformed” Holocaust deniers, it doesn’t understand how hate works online.

Mark Zuckerberg, presumably discerning someone’s intent. Drew Angerer/Getty Images

Facebook has put a couple of months and large sums of advertising dollars into a grand mea culpa over its data policies and its role in promoting fake news during the 2016 election and after—an effort that seemed to derail last week when a Facebook executive explained to a gathering of reporters why the company had not banned the right-wing conspiracy site Infowars, and which went up in flames on Wednesday when CEO Mark Zuckerberg suggested he would not even boot Holocaust deniers from the platform. In both cases, the company implicitly or explicitly drew a line between intentionally hateful or ill-intentioned content and misinformation—and said one was worthy of removal but the other was not. The moments suggested that Facebook still hasn’t reckoned with the ways promoters of hate often achieve their ends online, nor grappled seriously enough with what to do about it.

This was clear on Tuesday, when Facebook’s head of global policy management, Monika Bickert, defended the site’s content filtering before the House Judiciary Committee and was quizzed by lawmakers on how the site handles pages for outlets like Infowars. When Rep. Ted Deutch asked, “How many strikes does a conspiracy theorist who attacks grieving parents and student survivors of mass shootings get?,” Bickert explained that while individual Infowars posts found to have violated Facebook’s terms of service had been taken down, the Infowars page that published those posts had not merited expulsion. “If they posted sufficient content that violated our threshold, the page would come down,” Bickert said. “That threshold varies depending on the severity of different types of violations.”

The actual parameters of that threshold weren’t made clear during the hearing. But on Wednesday, Motherboard published internal Facebook documents it had obtained that shed light on the specifics of the social network’s moderation policy, at least when it comes to hate speech. Facebook’s umpires use a strike system—five instances of “hate speech” within a 90-day period, or five examples of “hate propaganda, photos of the user present with another identifiable leader, or other related violations,” can earn a page a takedown. Pages can also be deleted if at least 30 percent of the content posted to them within a 90-day period by nonadministrators violates the site’s community standards. On Wednesday evening, Facebook announced a new policy on misinformation apparently aimed at its international users, clarifying that while its general goal is to reduce the “distribution of misinformation” rather than remove it, it is now removing from the platform misinformation that leads to physical harm, as it has in Myanmar. (It is surely a good thing that Facebook plans to work with international partners and organizations on the ground in cases where misinformation stokes violence.)
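To make the reported thresholds concrete, here is a rough sketch, in Python, of how the two rules Motherboard described might be applied to a page’s recent posts. Everything here other than the five-strike and 30 percent figures—the function name, the data structure, the flags—is an illustrative assumption, not Facebook’s actual system.

from dataclasses import dataclass
from typing import List

@dataclass
class PagePost:
    is_strike: bool           # hate speech, hate propaganda, or a related violation
    violates_standards: bool  # violates community standards more generally
    by_administrator: bool    # posted by a page admin rather than a visitor

def page_merits_takedown(posts_last_90_days: List[PagePost]) -> bool:
    # Rule 1 (as reported): five strikes within the 90-day window.
    strikes = sum(1 for p in posts_last_90_days if p.is_strike)
    if strikes >= 5:
        return True
    # Rule 2 (as reported): at least 30 percent of content posted by
    # nonadministrators in the window violates community standards.
    non_admin = [p for p in posts_last_90_days if not p.by_administrator]
    if non_admin:
        violating = sum(1 for p in non_admin if p.violates_standards)
        if violating / len(non_admin) >= 0.30:
            return True
    return False

Even in this toy form, the gap is easy to see: a page with four strikes in the window, and compliant visitor posts, passes both checks.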

But none of this really clarifies how Facebook goes about deciding what is or isn’t hate speech—which both differs from and, crucially, can overlap with misinformation—and why a multiple strike system is necessary in the first place. Under the thresholds defined, pages that publish extreme racism only four times within 90 days are, as far as Facebook’s concerned, welcome parts of the community.

If the Motherboard revelation left some nagging questions, Zuckerberg’s interview with Recode created a full-scale PR crisis. In the conversation published Wednesday, he explained the platform’s content-filtering principles in free-speech terms and referenced how the site might handle Holocaust denial:

So I’m Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong — I don’t think that they’re intentionally getting it wrong. It’s hard to impugn intent and to understand the intent. I just think as important as some of those examples are, I think the reality is also that I get things wrong when I speak publicly. I’m sure you do. I’m sure a lot of leaders and public leaders who we respect do, too. I just don’t think that it is the right thing to say we are going to take someone off the platform if they get things wrong, even multiple times.

Later in the day, Zuckerberg walked back his comments. But his statement in the moment was remarkable for two reasons. First, Holocaust denial is promoted, in large part, intentionally and maliciously by people working to advance anti-Semitism through misinformation, and it is shocking that Zuckerberg suggested otherwise, even if haphazardly. Second, intent seems like an obviously flawed standard for judging whether misleading content should be allowed on the site. Almost half of American adults get at least some of their news from Facebook. It is evidently fine, to Zuckerberg, if conspiratorial and harmful nonsense comes up on their feeds as long as it’s innocently intended and put there by people who do not know what they’re talking about—even if that content could have violent consequences, as Holocaust denial, which feeds broader anti-Semitism, obviously could. Moreover, a standard of judging presumed intent rather than the actual content of the posts themselves might give Holocaust deniers and the distributors of other hateful content plausible deniability. If they write their posts in such a way that it seems like they’re innocently misinformed or “just asking questions,” Facebook might give them a pass.

Elsewhere in his Recode interview, Zuckerberg references, naturally, “free speech.” “I think a lot of the content that’s at play is terrible,” he said. “I think when you get into discussions around free speech, you’re often talking at the margins of content that is terrible and what should … but defending people’s right to say things even if they can be bad.” But Facebook is not an arm of the American government; it’s a private corporation. It has no free-speech obligations whatsoever. It can be as open or as restrictive as the people running it please. And in plenty of instances, the company has proven willing to impose heavy-handed restrictions on perfectly innocuous content. Nudity, for instance, is so strictly regulated on the site that users began protesting the removal of breastfeeding photos a few years back. In September 2016, a post by the Norwegian prime minister featuring the famous “Napalm Girl” photo—an iconic image of the Vietnam War—was taken down for violating community standards. Two months later, it was revealed that the site had experimented with a censorship tool aimed at making the site acceptable to the Chinese government.

Zuckerberg’s choice to tolerate some level of bigoted or misleading or otherwise controversial speech remains precisely that—a choice. And it is a choice made in the service of fulfilling the one obligation Facebook sincerely feels it does have: to make money by keeping the site open to as many types of users—conspiracy theorists and anti-Semites included—as its average users and the broader public are willing to stand. In places like Myanmar, Facebook appears to be getting serious about judging content according to its consequences, since those consequences have already been so dire. But a content-filtering framework that defaults to ascertaining intent—one designed to let some users get away with a series of hateful or misleading posts—both allows Facebook to claim it has a real moderation framework in place and leaves the door to Facebook’s community wide open to Holocaust deniers and mass-shooting truthers alike. Especially if they’re “just asking questions.”
