Future Tense

Meta Meets the Reality of War

Abandoned Russian military vehicles in a forest not far from Kharkiv, in the east of Ukraine, on March 6. Sergey Bobok/Getty Images

This article originally appeared in Tech Policy Press.

For those studying how the Russian invasion of Ukraine is reshaping technology policy, Meta’s decision last week to temporarily permit calls for violence against “invading Russians” will stand among the most consequential episodes of the war. The moment shows how public distrust of Western social media platforms can cast their policies in the most sinister possible light. It also provides a window into the impossible decisions that confront Meta and other technology companies as they try to write wartime content moderation policy in real time.

Here is how events unfolded:

On Thursday, March 10, a Reuters headline announced: “Facebook and Instagram to temporarily allow calls for violence against Russians, calls for Putin’s death.” On Twitter, the Reuters social media team shortened this headline further. “EXCLUSIVE Facebook and Instagram to temporarily allow calls for violence against Russians,” the tweet read.

This initial Reuters tweet received more than 20,000 shares. Three-fourths of these were quote-tweets: a highly unenviable ratio. Some Twitter users accused Meta of enshrining hate crimes or even willfully abetting a genocide against the Russian people. Others noted how this was par for the course for a company whose services had been implicated in the spread of violence around the world. It seemed to be a powerful indictment of the bigotry and cruelty of Meta executives, playing out along a well-established frame.

Few would notice that the story’s headline was updated shortly after publication or make note of the more nuanced details in the body of the article. The leaked content moderation guidance described a change in the enforcement of “T1 violent speech”—a content category that Meta typically removes without exception. For a limited time, the new guidance said, Meta would permit calls for violence against the Russian and Belarusian dictators, Vladimir Putin and Alexander Lukashenko, so long as the violence was nonspecific (i.e., without referring to a specific violent plot). Meta would permit calls for violence against Russian soldiers, “EXCEPT prisoners of war.” Finally and most controversially, Meta would permit calls for violence against “Russians” when it was “clear that the context is the Russian invasion of Ukraine.” Broader calls for violence against Russian people would still be prohibited per Meta’s Hate Speech policy. The guidance applied to Ukraine and several neighboring Eastern European nations, as well as Russia itself.

On Friday, the Russian government pounced, taking advantage of the outrage generated by the original headline. Russian authorities launched a criminal investigation of Meta, while the Russian Attorney General’s office formally requested that Meta be declared an “extremist organization.” Meanwhile, the Roskomnadzor, Russia’s state censorship agency, announced that it would block Instagram beginning March 14.

The initial Reuters headline and tweet were a clear example of media failure. For the Russian government, they were an unexpected gift. Although the Roskomnadzor had banned Facebook on March 4 and hinted at more drastic moves to come, following through was easier said than done. Historically, Instagram has been quite popular in Russia. On Instagram, it is easy for users to engage with entertainment and cultural content while steering clear of political issues that might only imperil them (for many of the same reasons, Instagram has remained a similarly popular platform in Iran). As of 2021, roughly 60 million Russians had an Instagram account—about 40 percent of the country. Had the Roskomnadzor simply blocked Instagram as a preemptive war measure, it might have provoked widespread fury. By citing the misleading Reuters headline, however, the Russian government was able to recast its act of censorship as a necessary defense against Meta’s xenophobic “extremism.”

The full impact of this episode is not yet known. Should the Russian government formally declare Meta an extremist organization, access to WhatsApp—by far Russia’s most popular messaging service and the foundational communications platform for millions of Russian families—will likely be cut off as well. Ukrainian President Zelensky’s public praise of Meta for “[standing] side by side” with Ukraine in the “fierce battle in the informational space” is likely to accelerate this break even as it improves Meta’s reputation among tens of millions of users. Meanwhile, Meta’s continuing attempts to revise and reinterpret its wartime policies—now emphasizing that the platform will remove any calls for violence against Putin or Lukashenko—may enable the company to safeguard the last vestiges of its presence on the Russian internet for a few more crucial weeks.

Looking beyond the fallout from the misleading Reuters headline, Meta’s leaked content moderation guidance represents the most direct articulation to date by Meta (or any company) of a wartime content moderation policy. That said, it appears in line with the tacit positions adopted in past conflicts. Meta does not appear to have ever removed widespread calls for violence against the Taliban by Afghan users during the fall of Kabul in 2021, for instance, or by Iraqis against the self-proclaimed Islamic State during the reconquest of Mosul in 2017.

In the 2020 war fought between Armenia and Azerbaijan—the clearest example of an interstate conflict having previously played out on social media—both Armenian and Azerbaijani Facebook users spread content that glorified violence against the opposing nation’s military. Meta removed war content only when it sought to dehumanize or delegitimize entire groups of people, as in the case of an Armenian propaganda video that portrayed Azerbaijan as a “fake,” backward, Islamized nation. Meta’s Oversight Board reviewed and upheld this content removal decision, noting that the content was intended to stir hatred on the basis of national origin. Meta’s leaked Russia-Ukraine policy guidance essentially followed this same norm.

While it is uncomfortable to see a social media platform expressly allow calls for violence, it is also important to consider what alternatives actually exist. Russia is currently prosecuting a war of aggression against Ukraine, one in which it has decimated Ukrainian cities, bombed hospitals, and undertaken the extrajudicial murder of Ukrainian citizens in regions that it occupies. If Meta had not changed its policy, it would be the job of Facebook and Instagram content moderation teams to remove any speech in which Ukrainians expressed fury against Russia or in which they celebrated the effectiveness of their own military in killing Russian invaders. Given the volume of such content, Meta would likely need to automate the task, using machine detection to identify, flag, and possibly remove Ukrainian speech that referenced the ongoing invasion.

Such a censorship regime—to the extent that it could be effectively enforced, given the likely volume of violative content—would make it essentially impossible for Ukrainians to discuss current events. Ironically, such censorship would also directly complement Russia’s war aims: Russia has sought to suppress news and first-hand accounts of the invasion as thoroughly as possible.

Indeed, Meta’s subsequent attempts to clarify and narrow the scope of its policy guidance demonstrate the inadequacy of any wartime content moderation policy. On March 11, Meta’s president for global affairs, Nick Clegg, announced that the guidance to moderators would now apply only to individuals in the nation of Ukraine. This decision was likely intended to allay the concerns of the Russian government as well as those of ethnic Russians across Eastern Europe, who understandably fear scapegoating and harassment. However, the tweak also carried its own absurd implications. Under a reasonable reading of the new policy, a Ukrainian in Ukraine would be within her rights to mourn the murder of her family and call for vengeance against the Russian military. But if that same Ukrainian wrote these things from a refugee camp in Poland, she might be liable to have her content removed or her account banned for using unacceptably harsh rhetoric.

Meta’s struggles demonstrate an irreconcilable tension in trying to adapt content moderation policy to major conflict. Meta’s mission statement is to “give people the power to build community and bring the world closer together,” and content moderation exists to stem the spread of violent and hateful content. But wars are exercises in violence, fueled by cycles of hate. Accordingly, social media companies will never be able to write a wartime content policy nuanced enough to somehow sidestep violence, hate, and death. The only solution is the end of the war itself—something over which platforms have vanishingly little power.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.