Hate speech, misinformation, harassment, terrorism, sexual exploitation: The public’s demand that social media do more about these and other daunting problems is growing. Many feel that social media are failing us, as users and as a society. Some argue that the problems are persistent because the platforms aren’t truly interested in solving them, or that they are too immense in scope, across nations and cultures and languages, to be tamed.
But the problem is, I think, even more fundamental. The very premise of content moderation, as it is performed by every major social media platform, is fundamentally flawed—because it is done by the platforms on our behalf.
We are reaching the limits of this “on our behalf” approach. Social media platforms cannot know my values, or how they differ from yours, or from someone halfway across the planet. Discerning what is and is not cruelty, or obscenity, or fraud, cannot be left to a single entity. No matter how well they do it, platforms can never fully separate the job of overseeing the platform from the need to profit from it. Even if they could, they can never convince skeptics that the two aren’t hopelessly entangled. Most of all, there are some decisions that belong to the community affected by them—that must be contested in public, by the public.
Facebook may finally be reaching the same conclusion.
In a post Thursday, titled “A Blueprint for Content Governance and Enforcement,” Mark Zuckerberg tried on some humility, acknowledging that his platform is regularly exploited by those who circulate propaganda and hate, and that its massive team of content moderators frequently makes the wrong decisions about what to leave up and what to take down. Then he proposed something new:
In the next year, we’re planning to create a new way for people to appeal content decisions to an independent body, whose decisions would be transparent and binding. … First, it will prevent the concentration of too much decision-making within our teams. Second, it will create accountability and oversight. Third, it will provide assurance that these decisions are made in the best interests of our community and not for commercial reasons.
Not much explanation follows. As some have noted, this isn’t even the first time Zuckerberg and Facebook have floated this idea, and Thursday’s announcement only proposes this for next year, with nearly every detail still to be worked out. It’s hard not to be a little skeptical of the timing, given that Facebook spent Thursday reeling from Wednesday’s revelations in the New York Times about the company’s internal turmoil over misinformation and its decision to hire an opposition PR team to aggressively fend off criticism. The story shows no signs of slowing as more details emerge about the PR tactics it undertook, questions persist about what executives knew and when, and reports leak about tanking morale at Facebook.
Regardless of the timing, I want to take the proposal seriously, at least as a possibility. Because if it is handled well, this could represent an important new model for content moderation, providing a true counterbalance to the sometimes hesitant, often techno-utopian mindset at Facebook and other major platforms. And handled poorly, it could be little more than window dressing meant to distract from Facebook’s glaring problems—or it could erode users’ faith in these platforms even further.
When members of Facebook’s content policy team recently asked my advice on this policy, I said as much, and I raised a few questions about how this oversight might work. How Facebook ends up answering three core challenges will determine whether this works, or whether it only makes the problems worse.
1) Who will take this on? Facebook should be savvy enough not to appoint a gaggle of young, white tech bros. But it would be all too easy to populate the council in narrow and familiar ways. Already, some “elders” are emerging from the tech companies to occupy that trusted adviser role, in informal spaces like conferences and advisory boards, and it would be easy for Facebook to tap them for this. But how different is their perspective? Similarly, the internet engineering community has anointed its own “greybeards,” often looked to for advice on the future of the internet. There’s also a slightly broader group of Silicon Valley “digerati” who circulate from platform to platform, from tech company to venture capital firm to think tank, and already play an advisory role through informal advice, conference discussions, TED Talks, and the like. Drawing from these pools would do little to expand what this oversight could offer.
So, who then? Facebook might be tempted to look to existing oversight bodies as a model. Perhaps something akin to the Supreme Court, with lawyers specializing in these issues? But while Facebook should seek out independent, accomplished, and judicious minds, it should not let that produce an older, white, American, culturally elite panel. And there are many kinds of expertise that could be brought to bear besides the law.
Facebook may be thinking of this oversight body as a kind of proxy for the users. In Thursday’s post, Zuckerberg says, “Just as our board of directors is accountable to our shareholders, this body would be focused only on our community.” Zuckerberg has overused the word community for so long as to have rendered it nearly meaningless: It’s absurd to call 2.2 billion people a “community” and pretend that it means anything deeper than “people who use Facebook.” It is also unclear what the council being “focused” on that community would mean: Concerned for? Representative of? Beholden to?
Facebook is global, so members of this council should be, too. The impact of Facebook’s choices is felt differently in places like Brazil, or Myanmar, or India, or Germany, than in the United States. The concerns and expertise of these places should be represented. The Facebook user base has more women than men, so the council should, too. And activists, journalists, parents, performers, and community leaders—who all use Facebook in different ways, and thus bring different forms of expertise to bear—ought to be incorporated.
2) How will it work, and when will it intervene? Facebook, to its credit, has improved its response times over the years: Flagged content is now reviewed within 24 hours, and more and more content is proactively identified for review by detection software. Those who want to appeal a decision they felt was unfair would expect a similarly rapid response, so their post could be reinstated while it is still relevant. What kind of responsiveness could this council promise?
When Facebook enlisted fact-checking organizations to help identify misinformation and conspiracy, it handed a small number of people an enormous task. It was no surprise that they could not deliver the instant and broad impact Facebook promised. An independent oversight council will face similar challenges, and its presence should not let Facebook off the hook. Facebook should be able to handle the mundane and obvious violations, and even the mundane and obvious appeal requests. This council should be reserved for the disputes that ought to be publicly contested, because the answer is not clear and because the process of deliberating them independently and publicly has its own societal value.
That would include challenging images such as the “napalm girl” photo from the Vietnam War, beheading videos produced by ISIS, and speakers like Alex Jones. Those kinds of test cases allow the public to grapple with the lines that are being drawn, and might give those lines legitimacy in the process. Facebook might even find that it wants to hand off some of the hardest cases so they can be deliberated in public.
3) With what authority? Zuckerberg promises that the decisions of this oversight council will be binding. This is the most intriguing possibility here: Facebook and the other social media platforms have largely wanted to retain the right to make these decisions on their own terms. Making the decisions of this group binding would be a powerful change. I could easily imagine Facebook backtracking on this, inserting caveats meant to allow it to overrule decisions it doesn’t like. It’s not easy for “the king to tie his own hands.” In addition to being binding in the specific instance, would decisions made by this council set precedent for Facebook moving forward? In the pursuit of true oversight, Facebook should be bold and make the council’s decisions both binding and precedent-setting. That would allow the council not just to reshape individual decisions, but also to serve as independent input into Facebook’s policies and processes themselves.
Making the decisions binding would also enhance the public legitimacy of this council. There is a risk that it will appear to be Facebook merely overseeing itself. One way to address this would be for Facebook to partner with an independent organization, one with its own legitimacy as a fair and respected actor in this domain. If that worked, the availability and authority of this council might eventually be offered to other platforms that also could benefit from similar oversight. Independent oversight, if it is implemented thoughtfully, could represent a significant shift not just in how Facebook moderates, but in the underlying logic of moderation itself.
This doesn’t have to be a perfect arrangement straight out of the gate, and we shouldn’t expect it to be. If what we really want is government oversight, this isn’t it, and that could still happen. If what we really want is not a council but a council of councils, with every nation having its own group to weigh its own values, this is not it either, though it could grow into that over time. But even if this is just a slight shift in gravity, a change in kind from the model we currently have, it is worth nurturing. Imagine if Twitter’s Trust and Safety Council, an advisory board that has been in place for years, were given independent and binding oversight on hard cases. Perfect? No. Better than the current system? Almost definitely.
Of course, all this will likely be overshadowed by Facebook’s latest scandal. Perhaps the council’s debut in 2019, if it happens, will be overshadowed by the next scandal. We can’t have a proper discussion about what independent oversight looks like, today, without it being wrapped in the revelations about Facebook’s internal turmoil and external tactics. Unfortunately, this is not mere coincidence or bad timing: The very idea of independent oversight is meant to bolster our trust in the platform, and it will only work if we trust the platform to implement it carefully. These days, trust in Facebook is in short supply.