Future Tense

How Much Is Facebook Really to Blame for Ethnic Cleansing in Myanmar?

The notion that it’s wholly responsible may be a bridge too far.

A Rohingya refugee girl looks on at a temporary shelter in New Delhi on Monday, following a fire that broke out at their camp early Sunday that left about 200 people homeless.
MONEY SHARMA/AFP/Getty Images

This piece originally appeared in the New America Weekly.

“Facebook has a genocide problem,” the New Republic proclaimed in March. Months after nearly 700,000 Rohingya Muslims were driven from their homes in Myanmar, the potential role of the social media platform in instigating violence against them has come under scrutiny, fueled at least partly by swirling controversy over Facebook’s bad behavior elsewhere.

At a blockbuster Senate hearing last week, Facebook CEO Mark Zuckerberg was questioned about his platform’s role in “inciting the possible genocide” in Myanmar. U.N. human rights investigators have said that Facebook “substantively contributed to the level of acrimony and dissension and conflict” in the country. Even one of Facebook’s top executives has said that he and his colleagues “lose some sleep” over the “real-world harm” their platform has caused in this context.

To an extent, this thinking makes sense. Facebook has outsize influence in Myanmar, where its user base has exploded in recent years and where it serves as the principal internet gateway for many people. This rapid shift has allowed for increased connectivity and openness in what was previously one of the most repressive countries on Earth. But it has also had a dark side: Facebook has become one of the principal venues for the spread of hate speech targeting the Rohingya and other Muslims.

And yet, even in this context, the notion that Facebook is responsible for what a top U.N. official last year called a “textbook example of ethnic cleansing” may be a bridge too far. Looking at the evidence, there’s reason to believe that while Facebook’s impact in Myanmar has been significant and often problematic, the broad claims implicating it in instigating the specific atrocities of the greatest international concern ultimately present a distorted picture of the dynamics and drivers of the persecution that the Rohingya in Myanmar face.

For one, the narrative that Facebook spurred atrocities against the Rohingya buys into the false notion that recent events in Rakhine state were principally communal violence. Even Mark Zuckerberg fell into this trap. In an interview earlier this month, he cited steps the company allegedly took to stop “sensational messages” aimed at inciting violence between religious communities. However, the most recent—and most brutal—actions against the Rohingya, which began in late August, weren’t spontaneous pogroms. Instead, they were a calibrated military campaign.

Indeed, evidence of participation by some local Buddhists notwithstanding, it was Myanmar security forces that planned and executed what they called “clearance operations,” which killed at least 6,700 Rohingya and drove hundreds of thousands more from their homes. Moreover, the most serious atrocities were far removed from the majority of those sharing hateful messages on Facebook, in areas of Rakhine state where the Rohingya accounted for as much as 91 percent of the population prior to the mass expulsion. Rather than being the responsibility of those whipped up by fake news, this brutality was principally the product of directives and careful planning by a military operating in the absence of civilian oversight.

Moreover, campaigns of this nature aren’t new in Myanmar, where the military has a consistent track record of committing atrocities against ethnic minorities, dating back long before Facebook—or even the internet—was widely available. Violent attacks against the Rohingya, in particular, have been a feature of Myanmar politics for decades.

In 1978, for instance, a Myanmar military campaign drove more than 200,000 Rohingya across the border into Bangladesh. And almost a decade and a half later, in 1991, nearly 250,000 Rohingya were forced to flee Rakhine state during a similar offensive. Both campaigns were characterized by the kind of widespread killing, torture, and rape that defined 2017’s violence, and both occurred long before the advent of Facebook. The Rohingya have also been systematically excluded from Myanmar society for years and vilified in domestic media, which was, until the beginning of this decade, entirely offline.

Of course, in recent years, Facebook has certainly been a venue for the spread of hate speech. But vicious, propagandistic invective has echoed equally loudly from traditional media in Myanmar. Rhetoric demonizing the Rohingya as “Bengali terrorists” appeared frequently in state and private outlets as the “clearance operations” proceeded. In 2016, the state-backed Global New Light of Myanmar newspaper even published an article with a veiled reference to the Rohingya as “detestable human fleas.” Even without Facebook, hateful messages would still have reached millions—perhaps just as quickly and effectively—breeding a climate of hostility toward the Rohingya and reinforcing public support for military actions against them.

In other contexts, including in the United States, a key problem that Facebook has presented is a propensity to enable dangerous fake news and fringe views to go viral. Regrettably, in Myanmar, the problem isn’t that Facebook is enabling fringe views to reach the mainstream, but that mainstream views themselves are already deeply racist and exclusionary. Moreover, the authorities are often the ones peddling misinformation.

None of this excuses Facebook’s failures in Myanmar. The social media giant was undoubtedly a platform for the spread of hate speech, and such invective demonstrably contributed to public sentiment that downplayed, excused, and even praised brutal military action against the Rohingya. In addition, there were steps that Facebook could have taken to address these specific concerns. As a group of Myanmar civil society organizations highlighted in an open letter to Zuckerberg, Facebook’s approach constituted “the very opposite of effective moderation,” failing to speedily address concerning posts, engage local stakeholders, or provide necessary transparency.

Even so, these failures weren’t the key drivers of the main violence in Rakhine state, and insinuating as much plays into the dangerous narrative that Myanmar’s current problem is too much democracy, when the problem is—and has always been—the opposite: a lack of democratic accountability. Facebook was an additional venue through which vicious slander spread, not the source of public animosity toward the Rohingya. It was decades of authoritarian propaganda—the vast majority of it offline—that created the narratives and conditions for this sentiment to grow and fester. And it was ultimately the Myanmar military—operating outside the constraints of public opinion—that carried out the atrocities themselves.

There are things Facebook can do to improve, but taking such steps won’t address the fundamental drivers of persecution and violence against the Rohingya. Broader political and social changes are required for that. Unfortunately for Myanmar, such changes don’t appear to be coming any time soon.