The Industry

Is Facebook’s New “Reputation Score” Actually Scary?

Here’s what we know about how it will work.

Facebook will always be gamed. JOEL SAGET/Getty Images

Two years of controversies have energized Facebook’s efforts to dampen the false and misleading information that can spread like wildfire on its platform, and the latest prong of this quest looks beyond examining the content that users flag as problematic. As first reported by the Washington Post, Facebook is now going to examine the reliability of the users who do the flagging.

Facebook is now assigning each user a score, on a scale from zero to one, that ranks that user’s reliability as a content flagger. Facebook started putting more of the onus on users to report posts and links shared as false or harmful in 2015, and predictably, not everyone in Facebook’s community of more than 2 billion has used the system in good faith. A form of trolling has gained steam on Facebook in which people with a particular agenda use the company’s content-reporting tools to flag stories as false even when they’re accurate. If a Trump fan thinks the New York Times is a vehicle for fake news, that user might flag everything he or she sees from the paper. It might not even be publication-specific. If a Black Lives Matter activist repeatedly posts content about racial injustice, that activist may draw excessive flagging of their links from, say, white supremacists hoping to silence them.

Facebook isn’t sharing much about how the new reputation system will work, likely to prevent malicious actors from further gaming the system while keeping their reputation scores high. Though the idea of being assigned a reputation ranking by Facebook might feel worrisome, particularly if it’s ever used for any other part of Facebook’s services, the company stressed that any scoring of a user’s reputation is just one of thousands of signals it weighs in its content-moderation efforts. And it could actually help triage the complaints that come in, allowing Facebook to address seriously harmful or false content faster. If a user has regularly flagged content that, when fact-checked, does indeed turn out to be false news, that user’s reports might be reviewed sooner than reports from someone who has repeatedly flagged content that checked out as true.
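
Facebook hasn’t described the mechanics of that triage, but the idea is easy to picture. The sketch below is a purely hypothetical Python illustration, not Facebook’s actual system: it simply orders flagged posts so that reports from users whose past flags have tended to check out get fact-checked first. Every function name, score, and default value in it is an assumption made for the example.

```python
from dataclasses import dataclass, field
import heapq


@dataclass(order=True)
class FlagReport:
    # Priority is stored negated so the highest-trust reports pop first from the min-heap.
    priority: float
    post_id: str = field(compare=False)
    flagger_id: str = field(compare=False)


def build_review_queue(reports, flagger_scores, default_score=0.5):
    """Order flagged posts so reports from historically reliable flaggers
    (reliability closer to 1) are fact-checked before reports from unreliable ones."""
    queue = []
    for post_id, flagger_id in reports:
        score = flagger_scores.get(flagger_id, default_score)
        heapq.heappush(queue, FlagReport(priority=-score, post_id=post_id, flagger_id=flagger_id))
    return queue


# Hypothetical usage: the scores are the zero-to-one reliability values described above.
scores = {"user_a": 0.9, "user_b": 0.2}
reports = [("post_1", "user_b"), ("post_2", "user_a")]
queue = build_review_queue(reports, scores)
while queue:
    report = heapq.heappop(queue)
    print(report.post_id, -report.priority)  # post_2 (0.9) is reviewed before post_1 (0.2)
```

Under an ordering like this, a deluge of bad-faith flags from low-scoring accounts would sink to the bottom of the review queue instead of drowning out credible reports.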

After the Washington Post story was published, some critics suggested that the initiative could be a worrisome prelude to a private-sector social credit system. A Facebook representative pushed back on that interpretation in an email to Slate, disputing the notion that there is a “centralized ‘reputation’ score for people that use Facebook.” Rather, the spokesperson said Facebook “developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible.”

The score that Facebook assigns users who flag posts isn’t permanent, the company stressed to me; rather, it’s calculated using machine-learning algorithms that adjust the score over time. It’s not clear if a score will ever be totally erased if a person doesn’t flag a post for a long time. The score does not affect other ways people use Facebook: if a person has a bad rating for repeatedly flagging articles as false when they aren’t, that doesn’t mean the links that user posts will be demoted in the news feed. The score also doesn’t account for flagging posts as harassment; it’s just about false news.
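
Facebook hasn’t said how its machine-learning models recompute the score, either, but a minimal, hypothetical stand-in makes the “adjusts over time” idea concrete: nudge a flagger’s zero-to-one reliability toward one whenever a fact check confirms their flag, and toward zero when it doesn’t. The update rule and learning rate below are invented for illustration and are far simpler than anything Facebook would actually run.

```python
def update_reliability(current_score, flag_confirmed, learning_rate=0.1):
    """Move a flagger's zero-to-one reliability toward 1 when a fact check confirms
    their flag and toward 0 when it doesn't (an exponential moving average).
    A toy stand-in, not Facebook's actual model."""
    target = 1.0 if flag_confirmed else 0.0
    return (1 - learning_rate) * current_score + learning_rate * target


score = 0.5
for outcome in [True, True, False, True]:  # fact-check results for four of the user's flags
    score = update_reliability(score, outcome)
print(round(score, 3))  # 0.582: the score drifts upward because most of the flags checked out
```

A rule like this never locks a score in place: a user who stops flagging accurate stories as false would see their rating recover as their newer flags check out.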

Although Facebook is trying to protect its system from being gamed by ideologically motivated actors now, it’s not clear that putting the onus on users to flag false content ever worked that well to begin with. The activist group Sleeping Giants, for example, recently called on users to protest Facebook’s decision to allow Alex Jones to stay on the platform, which spurred a deluge of users to report Jones’ posts as false and harmful. Jones was eventually removed from Facebook, but, according to the Post, executives at Facebook questioned whether the flood of reports about Jones’ content was itself an effort to game the reporting system. The fact that so many users had to take mass action before Facebook removed Jones’ content calls into question whether earlier reports were ever taken seriously, since Jones had been using his Facebook platform to spread harmful conspiracy theories for years, including the false allegation that the 2012 Sandy Hook massacre was a hoax. That charge became so popular among Jones’ adherents that some families of victims say they’ve had to move to avoid harassment and are unable to visit the gravesites of their murdered children. Facebook didn’t appear to take serious action against Jones’ content until just this month.

Facebook is huge, and there certainly needs to be a reporting mechanism of some kind for users to bring attention to false or harmful content. Finding a way to sort out those good-faith requests from bad ones makes sense, especially if some users are abusing the reporting tool. But as with any platform that allows users to post in real time, moderating that content is always going to be a game of cat and mouse. A platform as big as Facebook—where billions gather to share things about the world—is a honeypot for people trying to get their narrative, false or otherwise, to a big audience. And on the internet, the bigger the audience, the more money you can make, and the more good or harm you can do. No matter what Facebook does, someone will always try to exploit and abuse that system.
