Future Tense

The Future of Free Speech Online May Depend on This Database

Photo illustration by Slate (a gloved hand dropping tweets into a trash bin); photos by Getty Images Plus.

Last October, a neo-Nazi livestreamed his attack on a synagogue in Halle, Germany. The video of the shooting, which killed two people, stayed on Twitch for more than an hour before it was removed. That’s long enough for a recording to go viral—but it never did. While users downloaded it and passed it around on less-moderated platforms, such as Telegram, the recording was stopped in its tracks on the major platforms: Facebook, Twitter, YouTube. The reason, Vice reported, is that Twitch was quick to share digital fingerprints, or hashes, of the video and its copies with these platforms and others. All Twitch had to do was upload the hashes to the database of the Global Internet Forum to Counter Terrorism, or, as it’s been called, “the most underrated project in the future of free speech.”

The GIFCT has gone largely unnoticed by the public since it was established in 2017 by Facebook, Microsoft, Twitter, and YouTube. One reason is that what it does is complicated. (Another may be its “terrible acronym that no one can remember or pronounce,” says Daphne Keller, director of the Program on Platform Regulation at Stanford’s Cyber Policy Center.) Yet while onlookers pay close attention to Facebook’s Oversight Board, or its civil rights audit, or Twitter’s warning labels on President Donald Trump’s tweets, the GIFCT is making some of the most consequential decisions in online speech governance without the scrutiny of the public.

The GIFCT started as an industry response to pressures from governments, and especially from European Union legislators, to assume greater responsibility in countering terrorism online after attacks in Paris and Brussels. The basic goal was to coordinate content removal across different services. So together, the tech giants set content norms by creating a hash database of what they consider “violent terrorist imagery and propaganda.” This database, which serves as a kind of blacklist, helps other member sites—many with modest content teams—moderate their own platforms. At least 11 companies are members of the GIFCT, and an additional 13, including small operations such as Justpaste.it, have access to its database. Other than this basic structure, however, the inner workings of the GIFCT are opaque. Even as the coalition transitions to an independent organization, we still don’t know how individual platforms use the database. Are uploads blocked immediately? Do the platforms check each piece of content? It’s unclear.
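
To make the mechanics concrete, here is a minimal sketch, in Python, of how hash-based screening can work. It is purely illustrative, not the GIFCT’s actual implementation: the function names are invented, and the use of an exact SHA-256 match is a simplifying assumption; production systems generally rely on perceptual hashes so that re-encoded or lightly edited copies of a video still match.

```python
import hashlib

# Hypothetical, simplified sketch of hash-based screening: a shared set of
# fingerprints stands in for the GIFCT hash database. Real systems typically
# use perceptual hashes (which also match re-encoded or cropped copies),
# not exact SHA-256 digests.

shared_hash_database = set()  # hashes contributed by member platforms


def fingerprint(content: bytes) -> str:
    """Compute a fingerprint of an uploaded file (exact-match stand-in)."""
    return hashlib.sha256(content).hexdigest()


def share_hash(content: bytes) -> None:
    """A platform that identifies terrorist content adds its hash to the pool."""
    shared_hash_database.add(fingerprint(content))


def screen_upload(content: bytes) -> bool:
    """Return True if an upload matches a known hash and should be held or blocked."""
    return fingerprint(content) in shared_hash_database


# One platform shares a hash; another platform's upload of the same file is
# then caught without anyone at the second company reviewing the content.
video = b"...recorded livestream bytes..."
share_hash(video)
assert screen_upload(video)
```

The design means only fingerprints travel between companies, never the underlying videos themselves, which is one reason outside researchers have a hard time auditing what the database actually blocks.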

At its best, a coalition like the GIFCT can serve as a way for services to pool their resources to counter extremist content. And since the GIFCT includes smaller platforms, it may allow those platforms to catch content that would otherwise skate by unnoticed. The GIFCT also embraces a multistakeholder approach—something that’s often accepted as best practice in internet policymaking—in its recently established Independent Advisory Committee, which includes members from civil society, governments, and intergovernmental bodies.

Yet the GIFCT’s potential dangers, which were apparent to researchers from its inception, are just as clear. And they stem from the same places as its potential.

The GIFCT’s structure typifies what Evelyn Douek, a doctoral student at Harvard Law School and affiliate at Harvard’s Berkman Klein Center for Internet & Society, has termed content cartels, or “arrangements between platforms to work together to remove content or actors from their services without adequate oversight.” Many of the problems with the GIFCT’s arrangement lie in its opacity. None of the content decisions are transparent, and researchers don’t have access to the hash database. As Keller recently laid out, the GIFCT sets the rules for “violent extremist” speech in private, so it defines what is and isn’t terrorist content without accountability. That’s a serious problem, in part because content moderation mistakes and biases are inevitable. The GIFCT may very well be blocking satire, reporting on terrorism, and documentation of human rights abuses.

All of this isn’t so different from what platforms do every day, free from constitutional or democratic legal restraints, but its impact is far greater, as the GIFCT’s content decisions flow from large platforms to smaller ones. This dynamic isn’t inherently problematic. In an ideal world, according to Keller, platforms would share the ability to find and assess content, while the actual judgments and enforcement of a speech policy would fall to each platform. “In theory, GIFCT should enable that because it doesn’t force platforms to automatically take down whatever it flags,” said Keller, “but it so far doesn’t.” Realistically, small platforms end up following the content recommendations handed down to them, since they don’t have the resources to reevaluate flagged content themselves.

Now, those concerns are compounded by the risks of extralegal censorship. Spurred by the Christchurch Call, a pledge by countries and tech companies to combat terrorist content online, adopted after the New Zealand mosque shootings, the GIFCT announced in 2019 that it would overhaul its internal structure. One component of that, the Independent Advisory Committee, includes government officials from Canada, France, Japan, Kenya, New Zealand, the United Kingdom, and the United States. Again, in theory the inclusion of government would make the GIFCT’s work more democratically accountable. Yet it could also be an opportunity, as Emma Llansó, the director of the Free Expression Project at the Center for Democracy and Technology, said, for governments to have “just as much power as they usually do and none of the accountability.” On July 30, the Center for Democracy and Technology joined 14 other human rights and digital rights organizations in a letter to Nick Rasmussen, the new executive director of the GIFCT, that outlined concerns about the coalition’s lack of transparency and the role of government participants. “Counter-terrorism programs and surveillance have violated the rights of Muslims, Arabs, and other groups around the world, and have been used by governments to silence civil society,” the letter states. “We want to ensure that the boundaries between content moderation and counter terrorism are clear.”

Government participation in the GIFCT is a sort of Pandora’s box. First, there’s the question of whether platforms will decide to add content to the hash database under pressure from a government. There’s also the issue of governments looking to voluntary industry bodies like the GIFCT in the future to push for censorship that wouldn’t be lawful to pursue otherwise. Even government officials with the best of intentions, Llansó said, will likely face tensions between their work on the GIFCT and what their governments are doing in, say, national legislative processes—especially if they’re considering an online hate speech law, such as Germany’s Network Enforcement Act in 2017. Perhaps most importantly, added Llansó, this entanglement leaves us with a troubling question: How do people hold either the government or the companies accountable for the decisions that the GIFCT makes?

Despite these criticisms of the GIFCT, few seem to think it’s irredeemable. There is broad consensus that the organization and its practices would be far more trustworthy if it heeded calls for transparency. In July, the GIFCT did release a transparency report, which details new initiatives that Keller finds promising, such as a tool that allows platforms to say that an image, video, or URL should not be in the database. That could be useful for parody, news content, or borderline content. But many fundamental aspects of the GIFCT remain unknown. Keller’s transparency “wish list” includes granting researchers access to copies of the actual content that the GIFCT blocks, which would allow them to assess bias and mistakes, and releasing information about whether a user is notified (and given the chance to appeal) when the hash database blocks their content. Llansó, meanwhile, wants to know what exactly the Independent Advisory Committee does—and to ensure that it remains truly advisory.

Without these changes, among others, researchers and civil rights advocates can’t help but feel disappointment at the missed opportunity for the public to weigh in at a crucial moment in the formation of a new, semiprivate system of online speech governance. Decisions—about internal governance, about content moderation—are clearly being made behind closed doors, rather than through open, transparent, multistakeholder discussions. “I feel like the evolution of GIFCT is a real illustration of just how slippery terms like platform responsibility or platform accountability turn out to be in practice,” Keller said. When politicians demand initiatives like the GIFCT, she continued, it appears as though democratic governments are forcing platforms to act justly and lawfully. “But that’s not what it’s turned out to mean at all in practice,” said Keller. “In practice, it meant that four platforms got together and made this totally opaque, really powerful system that’s applying rules that aren’t the law … to control online speech.”

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
