The Industry

Facebook Is Pretty Good at Catching Nudity and Trolls. It’s Still Struggling to Stop Hate Speech.

Thumbs up for transparency.
JOSH EDELSON/Getty Images

Facebook removes tons of content from its platform for violating its rules, but it generally doesn’t reveal much about the nature or quantity of that content. On Tuesday, however, the social network pulled back the curtain with a new report on how it enforces its content-removal policies. The report, which covers October 2017 through March 2018, reveals that Facebook saw an uptick in posts containing graphic violence, adult nudity, and sexual activity in the first quarter of 2018 compared with the last three months of 2017. The company says it has been working to improve how it detects and responds to the various categories of worrisome posts that pollute its platform, like spam, hate speech, posts from fake accounts, and posts that promote terrorism. In the report, Facebook estimates “that fake accounts represented approximately 3% to 4% of monthly active users” during the six months covered. That’s no small amount, and it suggests you may well have interacted with a few fake friends or pages, granting them some access to your profile data in the process.

The increase in posts containing graphic violence, adult nudity, and sexual activity on Facebook was slight. In the case of graphic violence, the change likely reflected a real uptick in the volume of that content; in the case of nudity and sexually explicit material, Facebook notes that the jump is small enough that it may fall within its margin of error for measuring such activity. Facebook said it can’t estimate how many posts on the platform contain terrorist propaganda, nor does it yet have a reliable metric for measuring the prevalence of hate speech or spam.

Thanks to its artificially intelligent detection system, Facebook says that in the first quarter of 2018 it caught about 86 percent of graphically violent content before users reported it, as well as about 96 percent of posts containing nudity and sexual activity. But when it comes to hate speech, the company hasn’t been as successful: It caught only about 38 percent of that content before users flagged it. That shortcoming matters, as the U.N. noted in March, because it has helped fuel violence in Myanmar, where Facebook pages have been used to incite attacks on Rohingya Muslims amid what has been called “a textbook example of ethnic cleansing.” Finding hate speech isn’t easy with software automation, since it often requires a nuanced understanding of the culture in question. A person using a derogatory term to describe a group they belong to, for example, isn’t the same as someone outside the group using that term.

These numbers come as Facebook hustles to be more transparent in the wake of the Cambridge Analytica scandal and a year of revelations about how Russian agents exploited the platform to manipulate voters in the run-up to the 2016 election and the months that followed. Though the content-removal report doesn’t touch on the years of porous data-sharing policies that allowed the Trump campaign’s voter-targeting firm to end up with the private data of more than 87 million Facebook users, it does get into some of the ways the company works to remove content from fake accounts. It was through fake accounts that Kremlin-backed agents from the Internet Research Agency posted thousands of memes, videos, and ads intended to confuse voters and rile up Americans on some of the country’s most politically polarizing issues, like gun control, immigration, religion, and racism.

In the first quarter of 2018, Facebook removed 583 million fake accounts, down from the 694 million it purged in the fourth quarter of 2017. According to Facebook, the number of fake accounts on the platform at any given time largely reflects cyberattacks on the network, in which malicious actors use scripts or bots to create troves of fake accounts at once in order to spam or deceive users. Still, Facebook says its automated detection software finds about 99 percent of fake accounts before users flag them.

Facebook’s effort to share this data is voluntary, and it follows the company’s move at the end of April to release its guidelines for removing content. But lawmakers are increasingly warming up to legislation that would require transparency from Facebook beyond voluntary disclosures. One option is the Honest Ads Act, which would require Facebook and its peers to disclose when political ads are bought on their platforms and who paid for them. Another bill, introduced last month, would require Facebook and other internet companies to provide users, free of charge, with a copy of the data collected on them, as well as a list of who has had access to that data, whether through a sale or simply by its being made available. Both bills have limited but bipartisan support in the Senate.