On Monday morning, I became one of the trial users of a potential new Facebook feature, one of its recent attempts at fighting the fake news and unconstructive debate rife on its platform: upvotes and downvotes.
After an initial downvote trial among select Facebook users in February, the Reddit-style, crowdsourced comment ranking system is being further tested in Australia and New Zealand (I just returned from the former, and Facebook seems to think I’m still there), this time with upvotes, too. Little gray arrows—one up, one down—have appeared beneath comments on posts from select public pages, asking for my input. “Stop bad comments,” they implore. “Press the down arrow if a comment has bad intentions or is disrespectful.” “Your input is anonymous,” a pop-up adds. The purpose of the feature, as Facebook told Mashable, is to allow users “to push those thoughtful and engaging comments to the top of the discussion thread, and to move down the ones that are simply attacks or filled with profanity.” The voting options don’t show up on posts from friends—only on pages. So this is not an opportunity to disagree with a family member without experiencing blowback.
Even before I entered the trial, the system’s problems seemed glaring, a tool waiting to be abused. With its anonymity and reliance on mob rule, it’s the perfect feature for trolls and bots, lefties and conservatives—the whole menagerie of internet creatures—to silence opinions through effective organizing and well-policed echo chambers. Staring at the new scores, the orange -1s and -2s on perfectly well-intentioned comments, I could already see the impending vote-stacking wars—battles between upvoting and downvoting armies, gamified debate—unfolding before my eyes. This was just the beginning.
While scouring my feed to see which pages the pilot applied to—very public ones, including BuzzFeed Oz Politics, the Australian Labor Party (but not the conservative Liberal Party of Australia), the University of Melbourne, and Australian Marriage Equality—I noticed some lazy, unconstructive comments on a video from the Australian Broadcasting Corp., Australia’s public broadcaster. They were standard comments, really, calling the clip (a politically mixed panel talking about the livability of $40 per day welfare payments) “lefty BS” and a “leftist echo chamber.” No one had upvoted or downvoted these comments: They were unrated content, like fresh cement, waiting for a plus or minus—for my plus or minus. I had the power to downvote them, to drive them down the thread, all without revealing my identity. So I did.
Were the comments ill-intentioned or disrespectful? It’s not clear what exactly Facebook means by that. But was it satisfying to push the down arrow on complaints I found stupid and lazy and “bad,” making them -1s? You betcha. The downvote grants the power to dismiss content in a way the vague “reactions” never did—were you angry at the comment or sharing in the writer’s fury? Besides, reacting to a comment, positively or negatively, only made it more prominent. With the downvote, you can make someone’s disagreeable opinion a little less visible—a functional dislike button, even if Facebook will never introduce one.
The difference between the Down Under experiment and the February one is the introduction of the upvote button. But does the “up” arrow hold the same tantalizing appeal? Though the new system is ostensibly about rewarding thoughtful and engaging comments, I didn’t feel inclined to use it. We already have a myriad of ways to “upvote” content on Facebook, be that through liking or loving or laughing—why upvote when you can thumbs-up? Comments that receive thousands of likes will presumably still be given prominence. And while anonymity may be important when downvoting a sexist troll, we’re less likely to require invisibility when showing support for a reasonable commenter. Facebook’s second trial is trying to be a little more positive, but an upvote system seems likely to become mainly a downvote system because that is the side of the equation we are currently lacking: the ability to click in our disapproval.
For Reddit users, up- and downvoting are essential features of the site. (Co-founder Alexis Ohanian Sr. even joked following Facebook’s February experiment that he wished he’d trademarked it.) Yet even Redditors don’t have a unified theory. In a discussion on the subreddit TheoryOfReddit, users say that they will sometimes upvote people who have been downvoted purely out of sympathy or spite, or downvote something popular not because it’s bad but because “it doesn’t deserve to hit the front page.” See also: being tired of a repeated or unfunny joke, sensing passive aggressiveness in a post, believing the user is karma-whoring or attention-seeking, disliking the tone of the user comment (too preachy, serious in a jokey thread—party pooper), feeling a post is abusive/igniting a witch hunt, disliking the user, and thinking a user is stating opinion as fact. If even Redditors can’t agree on an objective usage, what hope do Facebook users have? (For what it’s worth, Reddiquette says, “If you think something contributes to conversation, upvote it. If you think it does not contribute to the subreddit it is posted in or is off-topic in a particular community, downvote it.”)
The environment created by downvoting may even make Facebook’s cultural issues worse. One Stanford University study showed that negative evaluations (downvotes) on social media can have a huge negative impact on the future behavior of downvoted users:
a contributor who is down-voted produces lower quality content in future that is valued even less by others on the network. What’s more, people are more likely to down-vote others after they have been down voted themselves. The result is a vicious spiral of increasingly negative behaviour that is exactly the opposite of the intended effect.
So much for negative reinforcement. Another study, published in Science, found that upvote/downvote systems breed “irrational herding” (this one is less surprising), with a single upvote greatly increasing the likelihood of a comment receiving a second, and snowballing from there.
Facebook clearly has some issues with hate speech and echo chambers, and many also think it has politically biased moderators (though they can’t agree on which side it favors). Facebook is considering multiple methods to address these concerns, including downvoting and hate speech reporting, a feature it rolled out by accident on May 1. Both of these sifters put a lot of onus on (not to mention faith in) users to review content, with Facebook delegating its unenviable task of overseeing billions of accounts. But unlike the “Does this post contain hate speech?” feature, the upvote and downvote system won’t require a Facebook employee—or A.I.—to verify it. It will be moderated by the masses.
We’ve already seen how “the masses” can weaponize public opinion on social media, with 50,000 Russian bots trying—and some would say succeeding—to influence Twitter users in the lead-up to the 2016 presidential election (let’s not forget Brexit, too). This feature might make it even easier for organized groups to control what people see, to push an agenda, and to kill unfavorable opinions—anonymously. Humans are just as bad. Facebook groups have become effective bases for interest groups, mobilizing outrage and driving members to bombard posts with negative comments—in a recent case, conservative “supporter” groups and meme pages drove an ABC affiliate page for kids off Facebook with their targeted crusade. Downvoting unfavorable comments en masse doesn’t seem like much of a stretch for these groups. Bots may have stirred things up, but people—combined with partisanship and prejudice—are the problem with Facebook discourse. It’s a downvote from me, Facebook.