Facebook on Wednesday announced a new policy for removing misinformation, including altered imagery, from the platform that is intended to cause or exacerbate violence. Misleading and inaccurate information spread through the social media site has been linked to violence in Myanmar and Sri Lanka. U.N. investigators reported that extremist Buddhists in Myanmar used Facebook to spread public messages in their ethnic cleansing campaign against the Rohingya Muslim minority. Extremist Buddhists in Sri Lanka have also taken to Facebook to post misinformation denigrating the Muslim minority, which has led to communal attacks.
A Facebook spokesperson wrote in a statement, “Reducing the distribution of misinformation—rather than removing it outright—strikes the right balance between free expression and a safe and authentic community. There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down. We will begin implementing the policy during the coming months.”
Facebook is beginning to work with local organizations and international partners in foreign countries, who can often more quickly detect misinformation campaigns on the ground, to help it enforce the policy and conduct investigations. These partners, which include civil society organizations and threat intelligence agencies, are encouraged to flag misleading content to the platform’s moderators, along with proof that the information is false. The moderators must also determine whether there is an imminent threat of violence stemming from the misinformation. If all the criteria are met, the moderators will remove the content from the platform. Facebook also plans to use media-matching technology to help identify potentially dangerous misinformation.
Facebook claims that it implemented this policy last month in Sri Lanka, where users were posting allegations that Muslims were poisoning the Buddhist majority’s food. A local partner informed moderators that the content had the potential to spur violence. Even though there were no signs that violence had occurred as a result, the moderators nevertheless decided to take down the posts because similar content had been associated with violence in the country in previous cases.
The new policy comes the same day that CEO Mark Zuckerberg landed in hot water for saying that Holocaust deniers who spread misinformation on the platform are not “intentionally getting it wrong” during an interview with Recode’s Kara Swisher. He later clarified, “I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that.”