
How Facebook Decides What Content to Remove

[Photo: Facebook logo. Oli Scarff/AFP/Getty Images]

On Tuesday, Facebook for the first time released the internal guidelines it uses to moderate content on its platform. The company also announced that it will begin allowing users to appeal content removal decisions.

Monika Bickert, Facebook’s vice president of global policy management, explained the decision to publish the guidelines in a press release:

First, the guidelines will help people understand where we draw the line on nuanced issues. Second, providing these details makes it easier for everyone, including experts in different fields, to give us feedback so that we can improve the guidelines—and the decisions we make—over time.

Over the course of 27 pages, the policy document covers a wide range of offensive content, including hate speech, nudity, and terrorist propaganda. The guidelines themselves are fairly nuanced and specific, with a litany of exceptions and examples accompanying each prohibition. In the section addressing graphic violence, the guidelines refer to images of cannibalism and “visible internal organs” as particular markers of objectionable content. In the section addressing the promotion of crime, the guidelines prohibiting animal abuse content come with a list of exceptions such as hunting, fishing, food preparation, and religious sacrifice.

Some of the stipulations point to recent issues that Facebook and other platforms have had with misinformation and harassment in the wake of high-profile tragedies, such as the Parkland, Florida, shooting. In the section on misinformation, the company clarifies that it won’t remove false news but will instead stymie its distribution and lower its ranking in the News Feed. There are also guidelines that ban “lying about being a victim of an event” and generally harassing survivors.

Users have long contended that Facebook’s enforcement policies are inconsistent and opaque, and moderators have had to reverse controversial content removals in the face of public uproar. In 2016, thousands of users posted the famous photograph of a naked 9-year-old girl fleeing from napalm during the Vietnam War after Facebook initially took down a Norwegian author’s post about it. Facebook soon restored the banned posts, citing the photo’s historical significance. Shortly after that incident, the company took down a screenshot that Black Lives Matter activist Shaun King had posted of an email he received calling him the N-word. Facebook reversed that decision as well.

Scrutiny over content moderation has been magnified in recent months. When CEO Mark Zuckerberg testified before Congress in early April, multiple senators and House members questioned him about ads for opioids appearing on the platform, as well as content that may have incited racial violence in Myanmar. Republican lawmakers also grilled the CEO on allegations that moderators unfairly target conservative users of the site.

In fact, the House Judiciary Committee will hold a hearing on Thursday on “social media filtering and policing practices,” which will in part examine how particular viewpoints may face censorship on digital platforms. Among the guests invited to testify are Diamond and Silk, pro-Trump pundits whose videos were labeled “unsafe” on Facebook. Zuckerberg told Congress that the enforcement action was a mistake, though the incident further convinced many conservatives that social media companies discriminate against right-leaning content.

Facebook, for its part, says it will hold a series of public events around the world to gather direct feedback on how to improve its moderation practices. Countries involved in the initiative include Germany, France, the U.K., India, Singapore, and the U.S.