Monday afternoon, Facebook announced a rare bit of positive news: Everyone who does contract work at Facebook in the U.S. will now earn a wage more reflective of their local cost of living—and those who do the hard and sometimes psychologically costly work of content moderation will be paid a little more.
In real terms, that means a run-of-the-mill non-employee contract worker at Facebook will make a minimum of $15 per hour in all U.S. metropolitan areas—with rates as high as $20 in San Francisco, New York, and Washington, D.C. Perhaps more importantly, “Operations” team members—the people on the front lines of screening graphic content—will earn between $18 and $22 per hour across the country. They’ll also get new levels of technical support in how they review content and more psychological support if they’re affected by the aftermath. According to Facebook, this includes “onsite trained professionals for individual and group counseling … during all hours of operations” and new programs and tools such as “adding preferences that let reviewers customize how they view certain content,” including being able to temporarily blur graphic images by default before reviewing them.
But what has pushed Facebook toward these big (and costly) changes? After all, humans have been doing content moderation for the site since it began in 2004, and contract labor has been used since 2010.
Some credit undoubtedly has to go to the steady drip of reporting over the last five years on just what the job of content moderation entails and the risks it presents. Adrian Chen wrote the seminal piece on the subject, a 2014 Wired article titled “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed.” Since Chen’s piece, there have been similar exposés looking at the real, murky, often brutal world of working as a content moderator: a 2016 story in the Verge by Catherine Buni and Soraya Chemaly; a 2017 article in the Guardian by Olivia Solon; the 2018 documentary The Cleaners; and in February, another piece in the Verge, this one by Casey Newton. Among other things, each documented, some in horrifying detail, the psychological toll on moderators whose work often entailed daily exposure to sexual and graphic violence and hateful content—beheadings, bestiality, child sexual abuse, disturbing hate speech, and more.
But perhaps the person who deserves the most credit is someone who doesn’t work in tech journalism: Sarah Roberts, an information studies professor at UCLA who has doggedly covered content moderation and its associated labor and economic issues for almost a decade and has been a constant force agitating for better working conditions for moderators.
In 2010, Roberts was an information science graduate student in Illinois when she came across the New York Times article “Concern for Those Who Screen for Barbarity.” The article, one of the earliest on the topic, describes individuals in Iowa working in a call center and screening content for websites. Roberts couldn’t believe she didn’t know such a job existed. “I had 20 years on the internet as a user and I was a low-level technologist,” she told me recently. “I felt like I was pretty aware of big-picture issues that existed, and in that moment I realized that it never occurred to me how these major corporate entities might be contending with the issues around soliciting content from users.”
Roberts started asking people with possible expertise in the field about it. No one she mentioned it to had ever heard of such work, and most replied, “Don’t computers do that?”
Roberts spent the next several years—all of her Ph.D.—trying to figure out who did do that job. Along the way, she met with a surprising amount of skepticism. “It was shocking how many people—people with no apparent motivation—would just tell me, ‘There’s no way there’s legions of people doing that job. You’re lying. That’s not true,’ ” she said. “The fact that people doubted that humans were doing this messy job instead of computers was fascinating in itself. [What] exactly is going on in terms of peoples’ aspirational relationship to these platforms where they don’t want [human content moderation] to be the reality?”
While the more high-profile news stories on content moderation labor offer a powerfully compelling, Upton Sinclair–like look into the jobs behind this harsh industry, they have often failed to take on more systemic issues, like why a market for such work existed at all or where that market was developing. Roberts’ work quickly moved in these bigger directions. She watched how different types of online platforms sourced different kinds of firms to do their labor, and from where. Some were boutique firms specializing in “soup to nuts” moderation; others aimed to target only “mom and pop”–size platforms. They also differed geographically. Early content moderation for U.S. platforms, for example, was based in the Philippines and India. Roberts theorizes that this is because those countries were formerly dependent on the United States. In contrast, Western European content moderation happened in Eastern European countries like Poland. It was pure globalization. “The textile industry is a good analogy,” says Roberts. “Except with content moderation, people are led to believe there’s no material cost.”
But in fact, the costs of content moderation to these companies are enormous. Though they increasingly hope to depend on A.I., they are coming to terms with the fact that, for now, more human moderators are needed—and the treatment of those moderators is a matter of public concern. Facebook says it employs more than 15,000 people worldwide doing this work on its operations team—and while Monday’s announcement relates only to the United States, Roberts is optimistic.
“When they told me what they were doing,” she says, “I read it and said, ‘This is great. This is just great.’ ”
It is also just the beginning. At least, that’s what Roberts hopes.
Just last month, Google announced better benefits for contract workers, but Facebook’s new hourly wages already outstrip them. “Facebook is really leading something with this—so how are the other firms going to respond to this?” Roberts says. “Because they’re going to have to move now that the bar is set in a new place.”
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.