The Facebook Oversight Board has spoken: Trump will not be reinstated on Facebook, at least for another six months, until Facebook clarifies its rules and penalties, particularly as applied to public figures. In its long and thoughtful ruling, the board cites international human rights law in detail. Most notably, it has declined to provide blanket political cover and legitimacy to Mark Zuckerberg’s Jan. 7 decision to deplatform the former president. Instead it has instructed the company to be more transparent and less arbitrary about how its rules are enforced, while also seeking—to the extent that its limited remit allows—to address the real harm that Donald Trump’s social media postings have caused.
This unprecedented deliberative body of experts represents an innovative experiment in improving public legitimacy and understanding around how the world’s most powerful global internet platform enforces its content rules. It’s the first institutional effort to apply a consistent, universal set of global standards for free speech and human rights to the content moderation practices of a social media company. The board’s application of international human rights law in all of its decisions, including the Trump case, sets an important precedent not only for Facebook but for other digital platforms seeking to craft and enforce rules governing users’ speech and behavior while also respecting and protecting users’ rights.
The Oversight Board, established in 2020 as an independent organization and funded by Facebook, started with 20 members but will eventually expand to 40. Its decisions about whether content or individual accounts should be allowed to remain on the platform, or be taken down, are binding. It may issue other opinions and recommendations, but those are not binding. Every day, Facebook takes down millions of postings that violate its rules, but the board considers only a tiny number of representative cases. They are chosen with the help of staff from among user appeals and requests for review from Facebook itself—which is how Trump’s case landed on the docket in January. Including the Trump case, it has issued a total of 10 decisions accompanied by detailed explanations about the process and information that led to each decision. As in all of its previous cases, Wednesday’s decision describes all of the information that the board considered and how that information informed its decision. It also describes the decision-making process, including some details about debates and disagreements among its members. The Facebook Oversight Board has done an admirable job within the confines of its remit, using the limited power that Facebook has seen fit to grant it. But Wednesday’s ruling only further exposes the need for real accountability.
First, we must be clear about what this organization is not. Calling this group of distinguished people the Facebook Oversight Board is a misnomer. A more correct name would be the “Facebook Content Moderation Adjudication Board” or “Facebook Content Governance Oversight Board.” Some people like to call it Facebook’s “Supreme Court” due to the nature of its rulings, which the company treats as precedent. Perhaps the “Jedi Council of Facebookistan” would be a more faithful representation of what this really is—and isn’t. In Wednesday’s decision, the self-aware board defines its own authority as “an independent grievance mechanism to address disputes in a transparent and principled manner.”
The point is this: The board members make no claim to oversee how Facebook Inc. operates as a company. They have no power to impose penalties on company executives for actions that are assumed, in good faith, to be mistakes but that in some situations can have massive and sometimes irreversible impact on lives and livelihoods.
The board also cannot compel Facebook to provide information. Facebook executives are free to decline requests for information that the board deems necessary to understand the context of their cases. In the Trump case, Facebook flatly declined to answer seven of the 46 questions asked by the board, many of them related to the company’s targeted advertising business model, algorithmic amplification system, and communications between company staff and political officeholders. The board’s lack of real power of discovery first hit a moment of reckoning in April, when the board considered a case in which Facebook had removed a post accusing India’s prime minister, Narendra Modi, of encouraging Hindu nationalists to promote the killing of Sikhs, a religious minority. The post was written and shared as Sikh farmers were protesting on the streets of New Delhi against controversial new agriculture laws supported by Modi’s government. Violent attacks targeting Sikhs, and the government’s failure to discourage them, were a serious concern.
The board found that the deletion of the post in question was not only inconsistent with the company’s rules but violated the free speech rights of an embattled religious minority. Yet Facebook executives declined to answer specific questions from the board about government requests to restrict protest-related content around that time. The company claimed that such information was not essential for decision-making in this case and referenced a list of other possible legal obstacles. While it did answer a question about how the company works to ensure staff independence from government interference, the board might have good reason to wonder how effective those measures are. In October, Facebook’s India public policy head, Ankhi Das, departed in the wake of allegations that she had told staff not to enforce hate speech rules against politicians from Modi’s ruling Bharatiya Janata Party. Be that as it may, Facebook has made clear that probes of staff ethics are beyond the remit of the so-called Oversight Board.
Nor is Facebook obligated to implement recommendations that the board makes about the company’s broader policies and business practices. Thus the board has limited power to address the design features that help to maximize or exacerbate the harms caused by certain users, or to amplify and spread specific types of posts. In earlier cases, the board’s recommendations for greater transparency or revisions to the rules have been adopted in some instances but not others, and to varying degrees. For example: In a ruling to reinstate content related to breast cancer symptoms and nudity, the board recommended that Facebook “inform users when automation is used to take enforcement action against their content.” The company responded that it would study the potential impact on users of such messages and “continue experimentation to understand how we can more clearly explain our systems to people.” In a ruling on COVID-19 misinformation, in which Facebook complied with the instruction to reinstate content, the company disagreed with a nonbinding recommendation to modify its approach to removing posts about alternative treatments not proven to pose imminent harm.
Still, those disagreements haven’t stopped the board from pushing. In the Trump decision, it calls on the company to “develop effective limitations” on how key features of its business model amplify “speech that poses risks of imminent violence, discrimination, or other lawless action.” Facebook has not yet indicated whether it will implement this and other recommendations. Documents leaked in 2020 revealed that Facebook is well aware of the harms its algorithmic systems can cause or contribute to.
The question is whether Facebook is capable of admitting publicly that its targeted advertising and algorithmic recommendation systems helped amplify Trump’s de facto call to arms. Those same systems, drawing on profiles Facebook compiles from vast quantities of data about users’ activities and characteristics, also enabled pro-Trump advertisers to target the people most likely to act on Trump’s statements with content that reinforced the lies used to justify violence. For this very reason, Jameel Jaffer and Katy Glenn Bass of Columbia’s Knight First Amendment Institute recently argued that it is premature for the board to rule on Trump’s Facebook account without having first commissioned an independent study of how Facebook’s design may have contributed to the Jan. 6 attack on Congress. In its ruling, the board has signaled it agrees that a permanent decision is indeed premature, although primarily for other reasons related to the company’s lack of transparency, clarity, and consistency about how its rules are enforced. Yet it also called for Facebook to conduct “a comprehensive review of its potential contribution to the narrative of electoral fraud and the exacerbated tensions that culminated in the violence in the United States on January 6, 2021.”
While the Oversight Board has no power to require such a review, we must not forget about Facebook’s real governing board, its board of directors, which has thus far done little to hold management accountable for the company’s impact on society. In late May, at Facebook’s annual shareholder meeting, investors will vote on a proposal calling for CEO Mark Zuckerberg to relinquish his seat as chair of the board to an independent chair. The proposal points out that separating the role of CEO and board chair is a basic feature of good corporate governance. Its authors argue that “the lack of an independent board Chair and oversight has contributed to a pattern of governance failings, including Facebook missing or mishandling a myriad of severe controversies, increasing risk exposure and costs to shareholders.”
Facebook management opposes that proposal, and thanks to Mark Zuckerberg’s absolute power over the outcome, he has no need to worry that it will ever pass. In 2020 and 2019, nearly identical proposals received a solid majority of the votes from independent shareholders who hold “class A” shares (one share, one vote). But Zuckerberg and other members of his inner circle hold “class B” shares weighted at 10 votes per share, enabling them to outweigh independent shareholders, and making it nearly impossible for shareholders to hold management accountable for social harms that the platform facilitates.
Class A shareholders have also repeatedly filed another proposal calling for the board to appoint a recognized expert in human and civil rights, with even less success. Current SEC rules, backed by U.S. law, empower Big Tech founders to control their real governing boards. Shareholders cannot prevent Zuckerberg from filling his board with people who will not challenge him too much, regardless of ordinary shareholder concerns. In 2019, Robert Jackson, then an SEC commissioner, warned of the consequences when CEOs cannot be fired: they become, in effect, monarchs of private kingdoms. The SEC, he proposed, should require companies to phase out dual-class shares so that shareholders are actually in a position to hold management accountable for failing to identify and mitigate social and environmental risks.
The Trump administration’s SEC ignored Jackson. Congress, whose myriad concerns about the power of Big Tech have been made abundantly clear through high-profile hearings, reports, and proposed legislation, could compel the SEC under Biden’s new chief, Gary Gensler, to prioritize dual-class share reform. Such changes, combined with other transparency and disclosure requirements, would empower shareholders and strengthen corporate governance. Ready or not, Facebook might then have to experience real oversight by its actual governing board.
As for the targeted advertising systems and algorithms that spread and amplify harmful content—thereby helping to maximize the harm that content can do—there is no substitute for government oversight and law. Investors, if empowered to do so, have a role to play by threatening to fire boards and CEOs who fail to address the business risks posed by widely perceived social harms. But ultimately, if a business model that generates breathtaking returns is causing persistent and widespread harm that the company is failing to mitigate, it needs to be regulated. The practices and business models of many other industries are regulated for exactly this reason: the law holds companies in a range of sectors accountable for labor practices and for environmental, health, and safety risks, and it should likewise hold tech companies accountable for failing to mitigate the harms caused by targeted advertising and algorithmic amplification.
All of this is beyond the reach of the Oversight Board. Unfortunately, lawmakers have thus far failed to do their jobs and use their real power to protect the public interest. We need regulation, based on data and research about technology’s impact on society, that protects the human rights and civil liberties of social media users.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.