
Social Media Companies Aren’t Liberal or Conservative

They’re capitalist. And their real biases are against labor costs and controversy.

Photo illustration by Slate: Mark Zuckerberg and Jack Dorsey. Photos by Bertrand Guay/AFP/Getty Images and Michael Cohen/Getty Images for the New York Times.

President Donald Trump complained over the weekend that social media companies are “totally discriminating against Republican/conservative voices.” Never mind that Trump rode his strident social-media voice all the way to the presidency and has been explicitly exempted by Twitter from the rules of conduct that apply to others. His concern now seems to be widely shared among the U.S. political right, whose leaders can reel off a litany of instances in which they believe conservative views have been unfairly “filtered” or “censored” on those platforms.

Trump’s latest gripes came after Twitter temporarily suspended far-right firebrand Alex Jones. Which made it easy to forget that Twitter had spent the previous week taking a beating from the political left for declining to crack down on Jones even after Facebook and other social media companies banned him. Indeed, over the past year, the company’s alleged tolerance of literal Nazis has been such a common complaint from left-leaning users that it has become a meme of sorts.

So when Twitter CEO Jack Dorsey acknowledged in a CNN interview on Sunday that his company’s bias is “more left-leaning,” some liberals were incredulous. (In a quick Twitter poll of my generally left-leaning followers, 91 percent said they view Twitter as either “neutral” or “biased against liberals.”) If Dorsey and his team lean left, they wondered, why does Twitter seem so sensitive to criticism from the right? Are they overcompensating?

Perhaps, at times, they are. And insiders have some theories about why that might be. But what much of the debate over Twitter’s alleged political bias obscures is that social media companies’ content and behavior rules aren’t driven by any political ideology, no matter what bumper stickers adorn the Priuses and Teslas in their parking lots. Rather, they’re driven by a desire to keep content flowing, labor costs down, and controversy to a minimum. Or, to put it more bluntly: They’re driven by the profit motive.

For internet companies in particular, the profit motive militates against putting a political or editorial stamp on their platforms. When social media firms do risk partisan anger to move the goal posts, they do it reluctantly, to defuse a PR crisis that could threaten their bottom line.

It’s tempting to imagine the CEO and his or her key deputies holed up in a room somewhere, deciding whom to permaban, whom to shadow-ban, and whom to tolerate. Hence the rash of complaint tweets that tag @jack, or the profanity-laced rants against Facebook CEO Mark Zuckerberg, when their companies make a controversial call. The actor Seth Rogen spoke out last month against Dorsey’s “bizarre need to verify white supremacists.” Republican Sen. Ted Cruz has accused Facebook of “censoring legal, protected speech for political reasons.”

But the reality of how this plays out is far more mundane—and, in many ways, less satisfying. Just how removed these decisions are from the political biases of social media firms’ C-suites was vividly illustrated in the latest installment of Radiolab.

The episode traces the evolution of Facebook’s content moderation policies from its early days, when prohibitions on images of nudity and gore were among the only hard-and-fast rules. Moderation was not about filtering ideas, but a form of customer service: The company took down posts only in response to users’ complaints. Decisions on where to draw the line fell to the judgment of a small cadre of Facebook employees, and a 2008 protest by a few dozen breastfeeding moms outside the company’s Palo Alto headquarters won an exception to the nudity rule.

But as the social network grew to take in millions, then billions, of users, the work of keeping it clean became staggering, and the company delegated it to poorly paid and loosely affiliated teams of contractors around the world. One contractor told Radiolab he was asked to make some 5,000 content decisions in the course of a typical eight-hour day. That works out to roughly six seconds per piece of content.

To keep a semblance of consistency under these conditions, Facebook replaced nuanced human judgments with a series of highly specific rules—basically a huge decision tree—covering every kind of content imaginable. Since the application of those rules frequently runs afoul of common sense, let alone justice and fairness, Facebook is constantly updating them. Even Monika Bickert, the company’s head of product policy, acknowledged to Radiolab that “no matter where we draw this line, there are always going to be outcomes we don’t like.”

For instance, Facebook evolved a patchwork of hate speech rules over time that placed different levels of protection on different types of groups, without regard to historical context. That’s how the company ended up with a rulebook that, perversely, allowed posts disparaging “black children” but banned posts disparaging “white men.” Facebook’s explanation: Because it operates in so many different countries and cultural contexts, it’s simpler to ignore historical privilege altogether.
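To see how a rulebook like that produces such an outcome, here is a minimal, purely illustrative sketch of moderation as a fixed decision tree. The rule names and categories are hypothetical stand-ins, not Facebook’s actual policy; they only mirror the reported logic that a group keeps protection when every attribute describing it is a protected category, and loses it when any qualifier (like age) is not.

```python
# Illustrative sketch only: hypothetical rules, not Facebook's real rulebook.
PROTECTED = {"race", "sex", "religion", "national origin"}   # assumed categories

def moderate(post: dict) -> str:
    """Walk the rulebook top to bottom and return 'remove' or 'allow'."""
    if post.get("nudity") and not post.get("breastfeeding"):
        return "remove"                        # hard rule; no context considered

    attack = post.get("attacks_group")         # e.g. {"attributes": {"race", "age"}}
    if attack:
        # A group is "protected" only if every attribute describing it is a
        # protected category. Historical context never enters the decision.
        if attack["attributes"] <= PROTECTED:
            return "remove"                    # "white men": race + sex -> removed
        return "allow"                         # "black children": race + age -> allowed

    return "allow"

moderate({"attacks_group": {"attributes": {"race", "sex"}}})   # -> "remove"
moderate({"attacks_group": {"attributes": {"race", "age"}}})   # -> "allow"
```

The point of the sketch is not the specific rules but the shape of the system: every post runs through the same checklist in seconds, with no room for the judgment call a human editor would make.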

That doesn’t necessarily mean Mark Zuckerberg is biased toward white men, or against black children (though it might mean that his company failed to consider the disparate impacts of its policies on more vulnerable groups). It’s doubtful that Zuckerberg was even aware of those specific guidelines. What it does tell us is that Facebook cares more about scalability—that is, the ability to serve billions of users with a minimum of human labor—than it does about making the right call in every case. It would rather get lots of things wrong, but do it with a veneer of consistency and neutrality, than entertain nuance or exercise human judgment. (It would also, in the long run, prefer to do it via machine-learning software, to keep down the human headcount.)

Why? For one thing, exercising careful human judgment in every decision would require Facebook to employ many times more people than it does today. Decisions would take minutes, even hours or days, rather than seconds. Think of the size of the U.S. legal apparatus—then apply it worldwide. Taking responsibility for every such decision would make Facebook’s cost structure much more like that of the slow-moving, traditional media companies whose business it so profitably disrupted.

Besides, it’s not as though taking responsibility for everything they publish has endeared media companies to politicians or the general public. A 2018 Knight-Gallup survey found that trust in the media has sunk to an all-time low. As if the social networks needed more reason to hide behind seemingly objective rulebooks and policy manuals, conservative leaders such as Cruz have begun to threaten attacks on the companies’ legal protections in retribution for politically tinged moderation decisions.

So Zuckerberg, Dorsey, and YouTube CEO Susan Wojcicki have, until recently at least, distanced themselves as far as possible from decisions about who gets banned and who stays, what speech to tolerate and what’s beyond the pale. They might all have their political biases, but those have strikingly little to do with how their platforms are moderated.

The exceptions arise when a particular decision arouses so much ire that the company’s leaders feel compelled to address it. When that happens—when a dispute over breastfeeding photos, or a video of a beheading, or the “napalm girl” photo, or Alex Jones rises to a certain level—the platforms face a dilemma. The goal of maintaining a consistent, one-size-fits-all rulebook comes into conflict with the goal of avoiding political controversy. In such cases, the companies have to weigh the political and reputational costs of each.

Even then, they tend to look to their existing rulebooks to bail themselves out. When both Facebook and Twitter explained that their rulings on Jones came only after users and the media flagged specific posts, few bought the excuse. How could Zuckerberg and Dorsey not know that Jones had been flouting their rules? But there’s reason to suspect it’s true.

Remember, reviewing Jones’ posts on a daily basis is nowhere in the job description of any company leader. It’s the job of faceless contractors making 5,000 decisions a day. So it makes sense that a critical mass of angry users could flag violations that had previously gone overlooked. And it makes sense that Zuckerberg and Dorsey would rather wait for that process to play out, preserving the façade of objectivity, than ban him by fiat.

Donald Trump, Ted Cruz, and other Republicans probably won’t buy Dorsey’s claim that he tries to keep his biases out of the company’s decision-making, particularly the next time an Alex Jones gets the boot. Nor will most liberals believe that he isn’t bending over backward to appease the hard right, especially the next time an Alex Jones isn’t ejected from the platform. When a company that shapes the flow of online political speech is making high-stakes decisions about who can talk and who can’t, it’s hard to accept that those decisions are the product of a jury-rigged rulebook or algorithm rather than political calculations or a secret agenda.

But it’s worth remembering, with these controversies, that social media companies do have an agenda, and it isn’t secret. Their agenda is to keep making money, and when it comes to high-stakes decisions about who can say what online, the most lucrative option is often to play dumb.