A European Court Decision May Usher In Global Censorship

A fight over a mean Facebook post about an Austrian politician will have worldwide consequences.

Austrian Green Party member Eva Glawischnig-Piesczek. Photo illustration by Slate; images by Facebook and Die Grünen/Wikipedia.

On Thursday, the top European court dealt a major blow to free speech, paving the way for a single nation to act as a global censor and to enlist online platforms as its minions in doing so.

Specifically, the European Court of Justice ruled that a single EU country (in this case Austria) can require an online provider (in this case Facebook) to take down an objectionable post, monitor its site for equivalent content, and take down those postings as well. And a country can do so, the court held, on a global scale, regardless of where the poster or the viewer is located. In so ruling, the court demonstrated a shocking ignorance of the technology involved and set the stage for the most censor-prone country to set global speech rules.

The case stems from an April 2016 Facebook post, in which a user shared an article featuring a photo of Eva Glawischnig-Piesczek, then-chair of Austria’s Green Party, along with commentary labeling her a “lousy traitor,” “corrupt oaf,” and member of a “fascist party,” apparently in response to her immigration policies. This would be core, protected speech in the United States, the kind of political speech the First Amendment is designed to protect. But not so in Austria. Glawischnig-Piesczek asserted that she had been defamed and, with the backing of an Austrian court, demanded that Facebook delete the post.

Facebook eventually did so, but in a geographically segmented way. The post was unavailable to users who accessed it from Austria but could be accessed elsewhere. Glawischnig-Piesczek thought this approach insufficient. She demanded that Facebook not only take down the specific post she had identified but also look for and delete “identical” and “equivalent” posts—and on a global scale. In other words, no one anywhere should be permitted to view the particular post. And no one anywhere should be permitted to post or view any equivalent attack on her character.
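Geo-blocking of this sort is simple to implement. Here is a minimal sketch, assuming visibility is keyed to the country a viewer’s request resolves to; the identifiers and data structures are invented for illustration and bear no resemblance to Facebook’s actual systems:

```python
# Hypothetical sketch of a geographically segmented takedown: the post
# stays up worldwide but is withheld from viewers whose requests
# resolve to a country where it has been blocked.
BLOCKED_COUNTRIES = {
    "post_12345": {"AT"},  # blocked only for viewers in Austria
}

def is_visible(post_id: str, viewer_country: str) -> bool:
    """Return True unless the post is blocked in the viewer's country."""
    return viewer_country not in BLOCKED_COUNTRIES.get(post_id, set())

print(is_visible("post_12345", "AT"))  # False: hidden in Austria
print(is_visible("post_12345", "DE"))  # True: visible from Germany
```

The sketch also shows the weakness Glawischnig-Piesczek pointed to: The viewer’s country is only inferred from the request, so anything that makes an Austrian viewer look German, such as a VPN or proxy, defeats the block.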

Today’s ruling effectively gives the Austrian court—and any other EU member state—a green light to issue the kind of global order Glawischnig-Piesczek desires. Notably, it comes on the heels of another decision in a case that involved very similar issues but reached a quite different result. That case centered on the geographical reach of the right to be forgotten—the idea that, to protect individual privacy, search engines have an obligation to remove unwanted results that come up in connection with a search of an individual’s name. The question was how far that obligation extends geographically. France wanted Google to delink objectionable content globally, whereas Google argued that the EU should not impose its particular balancing of privacy and speech rights around the globe. The European court sided with Google, concluding that EU law did not require global implementation of the right to be forgotten.

But as I warned at the time, that ruling also set the stage for the judgment today. In the right-to-be-forgotten decision, the court concluded that EU law does not require global delinking or takedown orders, but it left open the possibility that individual member states could issue such orders under their domestic laws, so long as the orders complied with other aspects of EU law. That is what makes Thursday’s ruling surprising: While one might conclude that Austria could, under EU law, order Facebook to take down a particular post globally, the obligation to look for and take down additional posts seems to run headlong into other provisions of EU law.

Specifically, the EU’s e-Commerce Directive prohibits member states from imposing general monitoring obligations on social media sites and other online providers, and rightly so. Government-imposed monitoring raises an array of privacy concerns in addition to the obvious speech concerns, something one would think the EU would be particularly attuned to, given its strong focus on individual privacy and data protection.

The court nonetheless concludes that the prohibition on general monitoring does not apply to monitoring in connection with specific cases, including the obligation to monitor for identical and equivalent posts tied to a particular case. The court acknowledges the concern about general monitoring but says it is addressed so long as there is sufficient clarity about what kinds of equivalent content qualify. According to the court, given that clarity, companies like Facebook would be freed from having to make the kind of “independent assessment” that would raise concern. They could simply carry out the takedown requirements with “automated search tools and technologies.”

This is a remarkable—and highly erroneous—claim. As Kate Klonick and I wrote previously, the court is presuming a level of technological sophistication and degree of specificity that simply do not, and likely never will, exist. Even identifying identical posts is challenging: It’s not clear how companies are supposed to determine what counts as identical unless the criteria are limited to shares of the precise post, with the precise picture and precise words. With equivalent posts, the line-drawing is exponentially harder. What if a post uses the same language in a different order? The same language but no photo? A different photo? What if it quotes two of the three critiques—“lousy traitor” and “corrupt oaf”—with no mention of alleged fascist tendencies? It seems inevitable that a platform like Facebook would have to engage in “independent assessment,” because defining what is equivalent with sufficient specificity seems nigh impossible for any court.
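To make the distinction concrete, here is a minimal sketch, using only Python’s standard library, of the two automated strategies a platform might plausibly reach for. The sample texts, function names, and matching methods are invented for illustration and say nothing about Facebook’s actual systems:

```python
import hashlib
from difflib import SequenceMatcher

# Invented example text standing in for the enjoined post.
ORIGINAL = "she is a lousy traitor and a corrupt oaf"

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def is_identical(post: str) -> bool:
    # Strictest reading of "identical": byte-for-byte hash equality.
    return sha256(post) == sha256(ORIGINAL)

def similarity(post: str) -> float:
    # One crude stand-in for "equivalent": a character-level
    # similarity ratio between 0.0 and 1.0.
    return SequenceMatcher(None, ORIGINAL, post.lower()).ratio()

variants = [
    "she is a lousy traitor and a corrupt oaf",   # verbatim copy
    "she is a corrupt oaf and a lousy traitor",   # same words, reordered
    "calling her a 'lousy traitor' is baseless",  # a defense, not an attack
]

for post in variants:
    print(f"identical={is_identical(post)}  "
          f"similarity={similarity(post):.2f}  | {post}")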

And even if the scope of equivalent content is precisely specified, context matters. The court says it is concerned about the dissemination of content that, “whilst essentially conveying the same message, is worded slightly differently,” thereby perpetuating the same alleged harm. But what if the discussion is not a critique of Glawischnig-Piesczek but a parody of her? Or a critique of her critics? No technological tool can reliably make those assessments. There is, after all, a reason that Facebook and other social media companies have hired tens of thousands of humans to review flagged posts in an effort at effective “content moderation.” Even when particular content is identified by technological tools, humans need to assess context and meaning.

If platforms do exactly what the court suggests and take down any post containing a particular combination of words of concern, they will almost certainly sweep in large quantities of totally harmless and legitimate speech. We are talking about global censorship on a potentially broad scale, and a severe ossification of public debate and discourse as a result.
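A toy filter makes the overbreadth visible. The banned phrases and sample posts below are invented, and the matching is deliberately as naive as a pure word-combination rule has to be:

```python
# Illustrative only: a naive filter that removes any post containing
# an enjoined phrase, with no regard for who is speaking or why.
BANNED_PHRASES = ("lousy traitor", "corrupt oaf")

def should_remove(post: str) -> bool:
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

posts = [
    "That politician is a lousy traitor and a corrupt oaf.",       # the attack
    "A court ordered posts calling her a 'corrupt oaf' removed.",  # news report
    "Whoever called her a 'lousy traitor' should be ashamed.",     # her defender
]

for post in posts:
    print(should_remove(post), "|", post)
```

The filter flags the news report and the defense just as readily as the attack itself; nothing in a word-combination rule can tell them apart.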

To be sure, the European court’s ruling only paves the way for these kinds of global monitoring and takedown obligations; it does not itself impose them. The case now goes back to the Austrian courts, which still must determine the scope of any injunction. There is thus some room for hope. The court did, after all, say that it did not want Facebook, or other online platforms, to have to make independent assessments of content. Ideally, the national courts will recognize that there is no way to monitor for equivalent content without running afoul of that principle.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.