Future Tense

Facebook’s VP of Global Policy Management on How the Platform Handled the U.S. Election

A Future Tense event recap.

The Facebook app logo is displayed and blurred on a screen. Olivier Douliery/Getty Images

It’s the week after the U.S. presidential election, the man in the White House is refusing to acknowledge the results and trying to spread fake news about a supposed fraud, and yet the biggest headache of the day for Monika Bickert, Facebook’s vice president of global policy management, may be coming from … Austria.

Bickert’s team manages the rules for how people can use Facebook services, “what can they post, what can they advertise, across Facebook and Instagram.” On Thursday, she spoke with Jennifer Daskal, the event’s moderator and the director of the Tech, Law & Security Program at American University Washington College of Law, in a Free Speech Project conversation about setting rules, adjusting them in response to feedback from the external community, and then enforcing them. It’s a gargantuan task when you consider the billions of daily posts on the social media company’s sites, more than 90 percent of which come from beyond the U.S. and Canada.

But some of Bickert’s headaches (I’m admittedly projecting here; she didn’t use that term) might arise from what she considers misperceptions about how Facebook goes about ensuring that its services are “fundamentally about creating a place for expression” while also ensuring that they are “safe,” caring about values of “authenticity, privacy, and dignity.” In response to a question from an audience member about what the rest of us might not understand about Facebook, Bickert sighed and uttered a where-do-I-begin “Oh gosh.” She then listed three erroneous beliefs about the company’s content moderation: that if enough people report content as objectionable the company has to remove it; that Facebook doesn’t communicate or collaborate with other social media platforms on matters like child safety or terrorism, or with governments around elections; and that “my job, the crafting of our policies, is done by a group of folks in California who don’t know anything about the rest of the world.” In fact, she said, her team is spread across the globe, reflects a wide range of backgrounds, and engages every day with external groups and experts to inform their global policymaking.

But let’s get back to Austria. Just prior to this conversation Thursday, that country’s Supreme Court had upheld an order that would force Facebook to take down, everywhere in the world, a user’s defamatory comments about an Austrian politician. Facebook has been fighting the order for years, including an unsuccessful attempt to have the Court of Justice of the European Union block any member state from seeking to police speech beyond its borders.

With all due apologies to Slate’s Austrian readers, most of us have never heard of Eva Glawischnig, the politician in question. But the significance of the case is hard to overstate: it opens the door to governments censoring speech elsewhere, and to a potentially scary future for free speech in which, as Daskal put it, “countries with the most restrictive norms get to set global rules.”

Bickert explained that social media platforms constantly engage with governments demanding removal of content for violating domestic laws. Facebook’s first step in these cases is to ascertain whether the content also violates its own policies—for example, by inciting violence. But there are plenty of instances in which content deemed illegal in a country doesn’t necessarily violate Facebook’s own standards (flag-burning, for instance). In those instances, Bickert said, Facebook reviews the country’s laws, international speech norms, and the human rights implications of taking down the content, and, depending on its evaluation, may push back on the takedown requests. Sometimes the company might prevail; other times the “dialogue is trickier.” But in any case, Bickert said, Facebook strives for transparency on these requests through its government request reports, which give the public and human rights organizations insight into free speech trends in various countries.

But again, these are typically cases in which Facebook is obligated to cede to national laws by taking down content within a country’s jurisdiction, not for all Facebook users worldwide. Bickert said the Austrian case, and a Canadian case involving Google, raise “serious concerns when we face governments ordering takedowns globally per their laws.” Imagine a world in which Americans or people in other countries can’t exercise their constitutionally protected speech—say, by criticizing a world leader—because of the objections of another government.

Holocaust denial, interestingly, was a type of content that, while illegal in some countries, including Germany, France, and Israel, did not violate Facebook standards until very recently. But in October, Facebook reversed its position and decided to bar Holocaust denial on its platform. When Daskal asked about this shift, Bickert said that Facebook’s hate speech standards “are based on attacking people based on a protected characteristic, not based on getting your facts wrong, or lying, or sharing misinformation about a particular group of people.” What changed over time, Bickert said, was the discernible connection between the rise in violent acts against Jewish people and places of worship and the increase in Holocaust denial, not to mention a lack of awareness of this history. This change in policy underscores the degree to which Facebook considers its community standards a “living document,” one that Bickert said is always being refined and updated.

In preparing for the U.S. election, Bickert said, Facebook was able to apply the lessons of more than 200 elections worldwide over the past several years. She noted that the company had devoted considerable effort to increasing transparency around political ads (and those behind them) and to being on the lookout for “inauthentic behavior” and attempts to suppress voter participation. Also, because it felt that political issue ads introduced at the 11th hour of a campaign season could not be independently fact-checked and debated properly, Facebook banned any such new ads in the days leading up to Nov. 3 (and after the vote). Bickert said that by doing so, the company was adopting a convention common in other countries of ceasing electioneering close to Election Day.

Some people are disappointed, she acknowledged, that Facebook is still not allowing social or political issue ads these days, especially with the pending Senate runoff votes in Georgia. She added that Facebook would welcome regulation or government direction in the U.S. on these questions of when and how to permit political advertising around elections, and that it has supported such efforts as the Honest Ads Act introduced by Sen. Amy Klobuchar.

In such a polarized political moment, Bickert said, Facebook has been extra vigilant around election content because “we don’t want our site to be used to incite violence.” And as in the case of its reversal on Holocaust denial, the company seems to be applying a more expansive view of what might constitute a call to violence. According to Bickert, Facebook has been looking out for “not only calls to violence of the sort we have always gone after very hard to remove, but we’re also now removing what we call ‘militarized social movements’—organizations that might be more local, that are organizing with a call to bring weapons,” often with an eye toward intimidation (say, of vote counters or others exercising their free speech rights).

Daskal mentioned that the de-platforming of a fast-growing “Stop the Steal” Facebook group seemed a clear case in point of this vigilance, but she asked whether these groups’ migration to other platforms makes the enforcement actions inconsequential or, even worse, counterproductive, since those other platforms might be harder to track and dangerous groups may end up with more room to maneuver in the shadows.

“This is the age-old question we have seen on so many issues,” Bickert responded, “including, looking back a few years ago, the rise of ISIS and the concern that if bigger companies became really good at stopping ISIS from using their services, it might all go to smaller services harder to monitor by law enforcement and researchers.” She added, “At the same time, my job is to make sure that our site is safe and that we are not a place where those sorts of events can be organized.”

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
