Future Tense

Digital Platforms Need Poison Cabinets

How a centuries-old German archival approach might make content moderation more effective and accountable.

Illustration by Natalie Matthews-Ramo: a drawer holding a bottle labeled “hate speech,” a bottle marked with a skull and crossbones, and a file labeled “disinformation.”

Foundational documents of liberal democracy, from the U.S. Constitution’s First Amendment to the Universal Declaration of Human Rights, establish strong protections for free speech. Even so, the rise of digital platforms—and social media in particular—has added new wrinkles to our thinking on what types of expression should be able to circulate widely with ease. Urged on by well-substantiated claims of real-world harms arising from disinformation, harassment, and other forms of problematic content, policymakers and members of the public have placed pressure on online platforms to limit the availability of certain speech. For much of that speech, it would be legally impossible for U.S. regulators to take such action themselves. Platforms, however, have much more power to shape the communities they cultivate and the information they promote. For instance, the United States government generally can’t silence those who spread COVID-19 vaccine misinformation, even as the Biden administration claims that such misinformation is killing people. Twitter, Facebook, and Google, however, can—and do—unilaterally move to contain or eliminate such speech and impose consequences on those responsible for it.

This is real power. Accordingly, as platforms have adopted a content governance role in earnest, they have faced calls for greater transparency around how they promote, demote, remove, or otherwise influence the circulation of speech, whether it be spam, a statement of solidarity with an online social movement, hate speech riddled with racial slurs, a viral meme, an album of wedding photos, pornography, or Russian disinformation. Such calls have come not just from the platforms’ staunch critics but also from a growing community of researchers focused on analyzing (and developing strategies to mitigate) harmful content online. The researchers argue that insufficient access to detailed data on platforms’ content moderation activity limits the scope and usefulness of their work. These concerns were vindicated in early August when Facebook shut down accounts belonging to NYU disinformation researchers, prompting an outcry from academic and philanthropic leaders. AlgorithmWatch, a German research and advocacy organization, soon after announced that it had shuttered a research project aimed at better understanding Instagram’s algorithms under pressure from Facebook.

But determining how best to provide transparency and research access has proved tricky. Transparency reports and other summary-level accounts of how platforms moderate are welcome but aren’t enough: highly aggregated overviews assembled by the very companies they purport to keep accountable are limited in their ability to foster public trust. Getting more granular is harder, though, in part because of problems of scale. Facebook says that from April to June, it took action against 31.5 million instances of hate speech on its platform globally and used A.I., rather than depending on user reporting or human moderators, to detect 97.6 percent of it. It’s not only the sheer number of decisions that makes record-keeping difficult. Data on these decisions is also inherently difficult to handle. Because much of the content is posted by real people, detailed public disclosure of a platform’s removal of specific posts, comments, and photos raises genuine privacy concerns. This is especially true for content that was originally limited to private audiences, such as in many Facebook Groups.

And a platform that publicly republishes content it has already deemed inappropriate courts the Streisand effect, in which attempts to suppress information only fuel its virality. Consider the example of COVID-19 misinformation. We probably wouldn’t want platforms to advertise a public list of individuals who, knowingly or unwittingly, spread misinformation within a rapidly changing information landscape, oftentimes only to their friends. A public archive of actioned-against posts could also create an asset for turning dangerous conspiracy theories into martyred ones. In 2020, Facebook decided to take down posts suggesting that one could drink bleach to prevent infection from the coronavirus. Those posts were removed to keep people from drinking a poisonous substance, even as some chose to see the takedowns as part of a conspiracy to quell a movement or to conceal hidden cures, no matter how many thousands of removed user posts suggested otherwise to “theorists.”

Faced with this quandary, many platforms have rationally opted to avoid retaining and sharing much of the content moderation data they generate. There is no standard practice of maintaining comprehensive archives of a platform’s moderation activity, even as that activity shapes platform discourse at a fundamental level.

That’s a real problem. The ways in which speech is produced and filtered on a societywide level are going undocumented. Without access to granular data on how content moderation works in practice, today’s researchers—and the public they serve—are impaired in their ability to understand, debate, and advance the state of content moderation. Indeed, platforms have taken active measures to prevent researchers from scraping data—leaving tomorrow’s researchers with an impoverished historical record through which to comb. Archivists, librarians, philosophers, and others have long grappled with the problem of appropriately handling content that could cause real harm if widely circulated but is too valuable or significant to be destroyed. The most popular depiction of that struggle in contemporary fiction may be J.K. Rowling’s inclusion of a “Restricted Section” of the Hogwarts library, stocked with powerful but dangerous texts, as a plot point in several of the Harry Potter books.

In a recent paper, two of us considered what lessons today’s platforms might learn from the history of the Giftschrank (“poison cabinet”), a German archival institution with origins in 16th-century Bavaria. Information control systems ostensibly designed to protect knowledge-seekers, Giftschränke kept texts deemed corruptive under lock and key. Their history is long and varied—over more than four centuries, Giftschränke have been home to everything from heretical polemics to pornography, to copies of Mein Kampf and other writings of the Third Reich.

Importantly, the function of a Giftschrank is not to render its contents forever inaccessible—throughout history, bonfires and paper shredders have served that function far more effectively. Rather, the idea behind a Giftschrank is to limit access to “poisonous” materials to those who demonstrate a need to review them for legitimate purposes.

The Giftschrank is both an instrument of preservation and one of control—a means of protecting “powerful knowledge,” but also of determining who gets, or doesn’t get, access to it. In some instances, this control can be deployed to socially productive ends. In the wake of World War II, Giftschränke were used to make hateful writings from the Third Reich available to scholars of genocide and cultural memory without undermining Germany’s program of denazification. But control over access can also be used to reinforce existing power structures, including repressive ones. In East Germany, Giftschränke were eagerly deployed to limit and shape academic and political speech. Controlling information flow gives those with power the ability to cover up scandals and shape narratives, or to choose who will be able to do so.

In many ways, the idea of a top-down system for controlling access to information and expressive material is out of step with the norms of liberal democracy. Still, the broader idea that restricting access to potentially dangerous content can sometimes be the only workable alternative to destroying it resonates today—including in platforms’ efforts to balance societal interests like the preservation of knowledge with their own corporate ones.

It’s understandable that platforms think that exhaustively retaining content moderation data might be prohibitively risky. Such data is sensitive and messy, involving the identities of real people and speech that many might find repulsive or even legally actionable. But platforms can benefit from developing virtual Giftschränke of their own. They could, as part of the content moderation pipeline, build comprehensive archives of information corresponding to every moderation action they take. These archives could include data on the underlying content, the action taken, the reasons for removal, and other relevant attributes. And platforms needn’t go it alone. Indeed, the most effective model might be one in which many platforms adopt a shared archival standard, or even a shared archive. Archivists, librarians, and other noncorporate experts in information management stand to play an essential role in helping platforms develop best practices for implementing Giftschränke, including by setting standards for researcher access to the data they hold.
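
To make the shape of such an archive concrete, here is a minimal sketch, in Python, of what a single archival record might contain. The ModerationRecord structure and its field names are illustrative assumptions on our part, not any platform’s actual schema.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModerationRecord:
    # One hypothetical entry in a platform's virtual Giftschrank.
    record_id: str          # stable identifier for this archive entry
    content_snapshot: str   # archived copy of, or pointer to, the actioned content
    action_taken: str       # e.g., "removed", "demoted", "labeled"
    policy_cited: str       # the rule the platform says the content violated
    decided_by: str         # "automated" or "human_review"
    decided_at: datetime    # when the action was taken
    audience_scope: str     # e.g., "public" or "private_group"

# A sample record showing how an entry might be populated.
example = ModerationRecord(
    record_id="rec-0001",
    content_snapshot="[archived copy of the removed post]",
    action_taken="removed",
    policy_cited="health misinformation",
    decided_by="automated",
    decided_at=datetime.now(timezone.utc),
    audience_scope="private_group",
)

Under a shared-archive model, several platforms could agree on a common set of fields like these, so that researchers could query moderation actions across services in a consistent way.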

Despite their inherent shortcomings, platform Giftschränke would be a marked improvement over what we currently have. It’s better to have private libraries than no books, secret knowledge than lost knowledge. As complex and fraught with questions of power, control, and ownership as such an approach would be, it would ultimately open new opportunities for transparency and academic discovery. And the active involvement of librarians and others who owe duties to the public rather than to any one corporate entity could go a long way toward mitigating some of those concerns.

Of course, platforms and their noncorporate partners don’t need to jump right into the deep end. Platforms could start by building a Giftschrank around one relatively narrow but important area of content moderation—like efforts to mitigate disinformation in the context of a particular election—rather than seeking to cover every form of moderation action right off the bat. They could also choose to “quarantine” archives for some period of time prior to releasing them. For example, a platform building a Giftschrank to track disinformation related to an election could hold off on releasing any data to researchers until the election has been certified to avoid fears of politicization or recirculation of removed content. Particularly hesitant platforms could even opt not to release the data until some point in the not-so-near future, depriving contemporary researchers of its benefits but buttressing the historical record.
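
As a rough illustration of how such a quarantine might be enforced, the sketch below, continuing the hypothetical Python example above, opens the archive to researchers only after an assumed certification date and embargo window have passed. The specific date and the 90-day window are invented parameters, not recommendations.

from datetime import date, timedelta

ELECTION_CERTIFIED_ON = date(2024, 11, 26)  # hypothetical certification date
QUARANTINE = timedelta(days=90)             # hypothetical embargo window

def archive_is_open_to_researchers(today: date) -> bool:
    # Records stay quarantined until certification plus the embargo window.
    return today >= ELECTION_CERTIFIED_ON + QUARANTINE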

Adopting a Giftschrank would undoubtedly impose some degree of risk on a platform. Novel legal issues may arise, privacy protection at scale is an imperfect science, and first movers may face the poking, prodding, and criticism that new transparency measures invite. But the potential upside makes these risk factors worth contending with. Indeed, as legislators and regulators contemplate new laws aimed at shaping platform practices, they might weigh the merits of including “safe harbor” provisions—or even outright mandates—aimed at making the adoption of new archival approaches more tractable.

A platform that lets researchers in—and gives them access to granular data—will have access to much better feedback and input than its closed-off peers, benefiting from the ingenuity and insight of a rapidly growing field. More importantly, allowing independent outside review of key content moderation data is absolutely necessary for public trust in online platforms. Today’s transparency reports, press conferences, and statements of policy, many of which more or less require the reader to take a platform at its word, simply aren’t up to that task.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
