Future Tense

Facebook Asked Users What Content Was “Good” or “Bad for the World.” Some of the Results Were Shocking.

The Earth flanked by a Facebook logo and a thumbs down icon.
Photo illustration by Slate. Image via titoOnz/iStock/Getty Images Plus.

On Oct. 23, 2020, a number of Facebook employees attended a work presentation that asked a rather complex question—“How much of News Feed is Good (or Bad) for the world?”—according to disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Frances Haugen’s legal counsel. The redacted versions received by Congress were reviewed by a consortium of news organizations, including Slate.

This slide deck focused on the company’s first “Good for the World” survey, conducted by the Facebook App Research Team and the Connection Integrity team, the latter of which, per a former employee, is “responsible for reducing objectionable content or negative experiences on the platform.” According to the slides, the survey was meant to “capture a wide-range of users’ subjectively-defined bad experiences on Facebook.” How it went down: From mid-August through mid-September 2020, the users who participated—about 143,000 people across dozens of countries—scrolled through their news feeds and were asked, for each post they saw, “Is this kind of post good for the world?” Respondents could pick one of five options: very good, somewhat good, neutral, somewhat bad, and very bad. Then Facebook would group these answers into two categories: Good for the World (GFTW), encompassing posts marked “very good” or “somewhat good,” and Bad for the World (BFTW), encompassing posts marked “very bad” or “somewhat bad.”

The researchers said that the study was “meant to be a subjective measure of [users’] own personal opinions and perceptions.” So instead of offering any guidance or definition about what may be considered “good” or “bad,” Facebook left its respondents to their own devices, literally. One team member said the question was worded that way to help “generate training data for personalized demotions models that would reduce unwanted negative experiences in Feed,” with “the reason being that people generally wouldn’t want to see things they felt were bad for the world.” That is, Facebook hoped the results would help artificial intelligence systems learn whether content was likely to be GFTW or BFTW.

Yet the survey found that the most engaged posts, or those with the broadest reach, were more consistently considered BFTW than those with less engagement or a lower reach. While reshares distribute the largest share of content considered BFTW, original statuses are the type of Facebook post with “the highest prevalence of BFTW.” The topics of posts most often considered BFTW revolved around crime, politics, and “games, puzzles and play.” (The commentary on the slide depicting this data mentions “it may be surprising” that “games” is ranked so highly, a fact that is only surprising if you haven’t paid any attention to the most fraught online controversies of the past decade.) However, as one slide notes, the data also demonstrates that posts around politics and social issues—labeled “civic content” at FB—are more likely to be perceived as BFTW even if they don’t have a wide reach. Another slide mentions that “BFTW prevalence was higher among posts predicted [by A.I. systems] to contain: violence, sexual solicitation, hate speech, violence incitement, nudity.” A different slide elaborates that “respondents were more likely to say the posts in their News Feeds were BFTW if they were: men, from older or younger age groups, less tenured, [or] users with very few or very many connections.”

The survey results presented interesting conundrums to the research team. The presentation notes that Facebook users have often complained about not seeing enough posts from their friends—yet original statuses were often thought of as BFTW. On the whole, U.S. Facebookers were three times more likely to consider “civic” posts BFTW than noncivic posts (those unrelated to socio-political issues). By contrast, most other countries surveyed did not have such a wide gulf between perception of civic and noncivic posts—and in a few nations, like Angola, Nepal, and Guatemala, civic posts were less likely to be labeled BFTW than noncivic posts were.

But what about the stuff on FB that’s GFTW, according to the survey respondents? That’s a more complicated question. The study indicates that while posts typically marked BFTW “indicated negative content,” the posts marked as GFTW varied a bit more: from posts about positive developments in the world to stuff users simply—and subjectively—found “humorous.” This could get dark: Two example posts widely marked as GFTW among U.S. users included a QAnon-promoting status as well as a screed claiming that Jacob Blake, an unarmed Black man who was shot by a Kenosha police officer in August 2020, “is completely responsible for what happened” to him. While the researchers claimed that the majority of GFTW posts were “benign or somewhat positive,” they also admitted that some included, for example, antisemitic comments.

So, what did we all learn here? It’s worth noting that this isn’t the first time Facebook and its users have attempted to gauge whether the social network is “good for the world,” and whether what’s GFTW is GFFBL (that’s the acronym I just made up for “good for Facebook’s bottom line”—see, I can do it too). As Guardian reporter Julia Carrie Wong wrote three years ago, a 2012 email by Facebook CEO Mark Zuckerberg using the phrase—albeit in a nonresearch context—was released to the public by the U.K. Parliament, reading in part (emphasis mine):

Sometimes the best way to enable people to share something is to have a developer build a special purpose app or network for that type of content and to make that app social by having Facebook plug into it. However, that may be good for the world but it’s not good for us unless people also share back to Facebook and that content increases the value of our network.

More in line with the slide deck in question, in February 2018, following the Cambridge Analytica scandal, some users reported that a survey question was popping up on their feeds, reading: “We’d like to do better. Please agree or disagree with the following statement: Facebook is good for the world.” There were five options to choose from: strongly agree, agree, neither agree nor disagree, disagree, strongly disagree. This occurred shortly after Zuckerberg told the New York Times that he wanted his kids to “feel like what their father built was good for the world.”

Then, of course, the question was used again in the 2020 survey from the slide deck. The first public mention of the mass BFTW-GFTW survey (without those acronyms) came in a November 2020 New York Times report; the paper also noted that the survey spurred employees to train a machine learning algorithm to demote BFTW posts in the feed. According to the Times, the tool was so successful at curbing the spread of BFTW content that it reduced the number of times users opened Facebook, which concerned executives like Zuckerberg. So the algorithm was tweaked to demote bad content less aggressively, until it no longer also reduced Facebook’s unique traffic. Turns out, what may have been BFTW was actually GFFBL.

The release of the Facebook Papers gives us more insight into the specific findings and adds new context to that decision. According to a July New York Times report, Zuckerberg continues to rely on GFTW metrics (as well as another one, CAU, which stands for “cares about users”) to inform his decisions regarding Facebook content moderation, even though his own Connection Integrity employees have said the classification was flawed. (A key bullet point from the slides: “A GFTW survey metric would provide limited insight into ecosystem health.”) There is also the fact that though Facebook’s engineers and data scientists realized that BFTW content was chock-full of violence and hate speech, top executives seemingly elected not to police such content more intensely, lest it keep users from logging in regularly. One could read it as implicit acknowledgment by Facebook’s leaders of a longtime truism among Big Tech critics—that social media thrives on and incentivizes outrage.

What about users who weren’t surveyed directly by Facebook? A Morning Consult poll released in October showed that Facebook’s net approval within the U.S. is currently lower than it’s ever been, and that other countries’ approval ratings of Facebook are also sinking. Perhaps a larger portion of Facebook’s nearly 3 billion users consider it to be bad for the world after all, with more joining the anti-Facebook bandwagon in the year since the BFTW survey was taken.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
