Future Tense

Facebook’s Anti-Conservative Bias Audit Is Here

Its results are entirely unsurprising. So what happens now?

Former Republican Sen. Jon Kyl led the exhaustive audit. Photo illustration by Slate; images by Facebook and Zach Gibson/Getty Images.

The preliminary results of Facebook’s long-awaited “bias” audit are out. The key takeaway? Everyone is still unhappy. The report is little more than a formalized catalog of six categories of grievances aired in Republican-led congressional hearings over the past two years. It doesn’t include any real quantitative assessment of bias. There are no statistics assessing the millions of moderation decisions that Facebook and Instagram make each day. Instead, there are merely a few conciliatory, minor product tweaks to address edge cases, such as permitting images of premature babies in pro-life ads (Facebook had previously prohibited images of medical tubes connected to a human body).

These tiny changes are all the more remarkable because the audit was an exhaustive affair, the result of about a year of research led by former Republican Sen. Jon Kyl, encompassing interviews with scores of conservative lawmakers and organizations. Facebook committed to the audit in May 2018, amid criticism that it silenced conservative voices.

Despite the time and energy invested, the conspicuous absence of evidence within the audit suggests what many media researchers already knew: Allegations of political bias are political theater. Sen. Ted Cruz has been touting anecdotes about Silicon Valley censorship for more than a year. President Donald Trump has fundraised on it. Recognizing that it plays well, left-leaning politicians have begun to seize on the censorship talking point, too: Sen. Elizabeth Warren got angry about Facebook denying one of her ads (it was later restored), and Rep. Tulsi Gabbard, another presidential candidate, is presently testing the limits of cognitive dissonance by suing Google for censorship while simultaneously touting her debate performance in Google search trends.

Still, the audit findings (or lack of them) may help shift the conversation in a positive direction. While they’re unlikely to put a stop to the belief in political bias, perhaps they will dissuade the Trump administration from pursuing a misguided executive order to “police” social media censorship. That may be too optimistic. But perhaps the findings—and the challenges of even conducting a meaningful audit—could be used to focus the conversation on real problems with social media: an advertising infrastructure masquerading as a communications infrastructure and algorithms that incentivize misinformation.

Fewer than two weeks ago, a draft executive order leaked, detailing a plan by Trump to address “anti-conservative bias” on social media platforms, including Facebook and Twitter. (The leak came shortly after a bizarre “social media summit” held at the White House.) The order, titled “Protecting Americans From Online Censorship,” signaled an effort to take the topic of anti-conservative bias from sound bites to rule-making. If enacted, it could significantly change Section 230 of the Communications Decency Act, the long-standing law that has governed American speech online, and make major tech platforms liable for perceived censorship. The order would seek to grant the Federal Communications Commission and Federal Trade Commission new power over content that social media companies currently moderate themselves.

But the volume of content and moderation decisions that those agencies would face is staggering: Twitter’s transparency report notes that, from July to December 2018, 11 million unique accounts were reported for rules violations. Of those, Twitter took some sort of action on 235,455 accounts flagged for abuse, 250,806 for hateful conduct, and 56,577 for violent threats. Meanwhile, Facebook moderates millions of posts per week. In the first three months of 2019, Facebook took action on 2.6 million pieces of content related to harassment and 4 million related to hate speech. Given that volume, human moderators and algorithmic systems inevitably make bad calls. The problem is that viral stories of those individual bad calls spread like wildfire, particularly if they feed the narrative that a distinct group is being censored.

The topic of free speech is always thorny, but the evidence that is supposed to support conservative criticism broadly, and Trump’s draft order specifically, simply doesn’t stand up to scrutiny. First, by the most transparent metric, reach, conservative news is top-performing content on Facebook. And attempts to assess conservative censorship quantitatively, such as a Quillette article analyzing prominent Twitter accounts that were banned, have revealed something else entirely: “Of 22 prominent, politically active individuals who are known to have been suspended since 2005 and who expressed a preference in the 2016 U.S. presidential election, 21 supported Donald Trump,” the author assessed. Among the 21 were conservative paragons Tila Tequila, David Duke, and the American Nazi Party; other conservatives sharply critiqued the attempt to reclassify extremists as conservatives.

Even the anecdotes are misleading. One of the most touted is the story of Rep. (now-Sen.) Marsha Blackburn, who alleged conservative censorship when she was prevented from running an ad that made false claims about Planned Parenthood selling baby parts. Twitter initially declined to allow what it deemed a contested and inflammatory claim to be served as an ad but said the Blackburn campaign was free to post and promote it organically on the site.

For many researchers who study social media and moderation, the hope was that the conservative bias audit would provide a large-scale assessment that would resolve the question of systemic political bias. It didn’t happen. And that’s partially because it can’t. Ultimately, this is a system of incredible complexity in which moderators make millions of hard calls. It’s also a system with no transparency or oversight. It’s time for partisans to stop working the refs to achieve better outcomes for their parties and instead work toward bipartisan policies that benefit us all.

We all spend time on a communications infrastructure that’s opaque, controlled by a handful of companies, and built to drive engagement at any cost. Feelings of frustration emerge when moderation rules are applied unfairly and there’s no one to appeal to. These are problems the left and right can find common ground on. Lawmakers and pundits in both parties should push for more transparency into recommendation engines and content takedowns, more accountability when those algorithms promote dangerous content, and a common-sense appeals process when individuals believe they have been unfairly censored.

There are already bright ideas in this vein: The social media industry and civil society could identify a system of shared standards or best practices; companies could then decide whether to adopt those standards, and consumers could sign up for platforms that best suited them. Improved A.I. moderation, coupled with a robust human appeals process and subject to auditing, is also a way forward—and would reduce the harms that many human moderators experience. There’s also the potential to replicate and improve upon systems such as the Lumen database, which makes copyright-related content takedowns visible to outside researchers; this would increase transparency and enable ongoing assessments to ensure there’s no bias.

We all want an internet that facilitates free expression. It’s time to pursue ideas that get us there, rather than political orders masquerading as policy.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
