Future Tense

Should Facebook Let the Taliban Post?

In this photo taken on Feb. 6, 2019, an Afghan reporter browses the Twitter page of Taliban spokesman Zabihullah Mujahid in the newsroom at Maiwand TV station in Kabul. Wakil Kohsar/AFP via Getty Images

On Aug. 28, Adnan Kakar, editor of the popular left-leaning Urdu-language Pakistani blog Hum Sub, woke up to find his Facebook profile permanently disabled. Facebook had also barred the Hum Sub Facebook page, which is followed by more than 100,000 people, from promoting content for 60 days.

Their crime? Sharing Urdu-language articles condemning the Taliban, along with a photo of Mullah Omar, the group's founder. Facebook notified Kakar and his colleagues that they had violated community standards related to dangerous individuals and organizations, even though by late August the Taliban were fully in control of Afghanistan.

Social media companies are still trying to figure out what to do with Taliban-related content after the group's takeover of Afghanistan. Now that the Taliban have announced a government, for instance, will Facebook continue to ban those who support it? The issue points to the challenges facing U.S.-based companies attempting to moderate content worldwide.

Some of the confusion around how social media companies should treat the Taliban stems from the U.S. government itself. Though the Afghan Taliban are on the U.S. sanctions list under the Global Terrorism Sanctions Regulations, they are not designated as a Foreign Terrorist Organization by the State Department's Bureau of Counterterrorism. The Afghan Taliban qualify for the Foreign Terrorist Organization list by virtue of attacking civilians and the U.S. military (indeed, the group's Pakistani wing, Tehreek-e-Taliban Pakistan, is designated), but they were never added, for political reasons: The group controlled several territories in Afghanistan, and designating it as a Foreign Terrorist Organization would have made it difficult for both the United States and the previous Afghan government to hold peace talks and negotiations with it.

These designations affect how social media platforms approach the Taliban, though each platform takes a different tack. Facebook has been the most stringent in blocking content related to the Afghan Taliban. It divides its Community Standards on "Dangerous Individuals and Organizations" into three tiers, the first being the most stringent. Facebook says it does "not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook." The section of Tier 1 relevant to the Afghan Taliban covers groups "including terrorist organisations, including entities and individuals designated by the United States government as Foreign Terrorist Organizations (FTOs) or Specially Designated Global Terrorists (SDGTs)." The rule goes on to say, "We remove praise, substantive support, and representation of Tier 1 entities as well as their leaders, founders, or prominent members." YouTube's ban on the Afghan Taliban is similar to Facebook's and blocks any content uploaded by or in support of the group.

Twitter, on the other hand, has given the Taliban more latitude, allowing their spokespeople to maintain a presence on the platform and issue press statements. Twitter's "Violent Organizations Policy" states, "There is no place on Twitter for violent organizations, including terrorist organizations, violent extremist groups, or individuals who affiliate with and promote their illicit activities." But Twitter moderates Taliban content on a post-by-post basis rather than imposing a blanket ban on the organization.

Twitter has come under fire in the past from members of Congress over its policies toward the Palestinian group Hamas and the Lebanese group Hezbollah. In October 2019, a bipartisan group of representatives called on Twitter "to stop blatantly violating U.S. law, to immediately change their policy, and to remove all content from Foreign Terrorist Organizations and affiliated profiles, including Hamas and Hezbollah, by November 1st." Twitter responded by saying it respected U.S. law but made an exception for "groups with representatives who have been elected to public office through elections, as is the case with parts of Hamas and Hezbollah."

These cases raise important questions about social media companies' often self-contradictory application of their content moderation policies.

For one thing, Twitter suspended former President Donald Trump's account for inciting violence while he was still in office, yet the Taliban, who have routinely incited violence against Afghans and the United States, were allowed to operate on Twitter subject only to post-by-post scrutiny. Facebook also suspended Trump. If social media companies can blacklist the president of the United States, can they not also choose not to abide by the U.S. foreign policy alignments in content moderation that potentially violate international human rights law?

Facebook itself seems baffled as to how to proceed. Company representatives speaking to the media recently said that Facebook may continue to ban content from the Afghan Taliban even if the United States takes the group off the sanctions list, while another company representative told the BBC that the U.S. sanctions list had guided Facebook's decision to ban Taliban content in the first place. Apparently the decision to ban the Taliban and discussion of the group is rooted in U.S. foreign policy, but the decision to keep the ban in place would not be. What, then, is Facebook guided by, and is the policy implemented across the board? Facebook needs to be transparent.

As it is, the Wall Street Journal recently reported that Facebook keeps a special list of VIP users, called XCheck, including politicians, athletes, and journalists whose posts receive special treatment and, for the first 24 hours, are not subjected to the review that ordinary users' posts face. According to the Journal, when Facebook's Oversight Board (created to be an independent body for content moderation appeals) asked for information about XCheck during its review of Facebook's decision to suspend Trump, the platform denied the list's existence. There is no point in having the Oversight Board if Facebook is going to use it to appear neutral on content moderation while withholding the critical information the board needs to make informed decisions.

One cautionary tale here is what happened after Twitter deleted a tweet from Nigerian President Muhammadu Buhari for using abusive language and threatening violence against a secessionist movement. Nigerian journalist and novelist Adaobi Tricia Nwaubani termed the deletion neo-colonial: An American company had decided what a Nigerian president could communicate to the citizens who elected him. In response to Twitter's censorship of his post, Buhari banned the entire social network from Nigeria, cutting off millions of Nigerians from a platform they rely on for speech and business. Social media companies need to carry out detailed human rights due diligence before making such decisions, lest actions meant to help a population end up being counterproductive.

Another challenge is that social media companies are making global policy decisions based on U.S. Foreign Terrorist Organization and sanctions lists that are by definition intended to advance U.S. foreign policy interests. As a result, people across the globe, especially those in politically contested areas such as Palestine and Kashmir, end up being silenced. For example, 7amleh, a Palestinian digital rights group, has documented more than 500 accounts that were silenced for sharing content related to forced evictions by Israeli authorities. Similarly, social media companies have been accused of giving in to pressure from the Indian government under its IT Rules and censoring Kashmiri activists and organizations. This is a grave violation of the rights to free expression guaranteed under Article 19 of the Universal Declaration of Human Rights and Article 19 of the International Covenant on Civil and Political Rights.

There are also double standards on the part of Facebook toward content in the global south as opposed to the West, as the Digital Media Alliance of Pakistan pointed out in a recent letter to Facebook. Facebook is quick to take action against posts about the Afghan Taliban in Pakistan, even condemnatory ones, while the same content remains allowed for users in the United States. For example, Pakistan-based journalist Adnan Rehmat faced a temporary suspension from Facebook for sharing a post by a U.S.-based journalist who faced no such consequences. Furthermore, Sirajuddin Haqqani, an Afghan Taliban leader now in the Afghan cabinet who carries a $10 million State Department bounty and is on the sanctions list, wrote an op-ed in the New York Times in February 2020, and the New York Times faced no consequences from Facebook for sharing it.

There is certainly no easy way to moderate content on social media, especially content related to dangerous and violent organizations and governments. Moving forward, however, there are some glaring issues that social media companies must address. They must enforce their community standards equitably across the world's regions and languages. They should reconsider letting U.S. foreign policy guide their rules. And they must be more transparent about their content moderation policies, including by publishing lists of the individuals and organizations they elect to ban, along with explanations of why. Without these changes, they cannot be surprised when people see them as doing the bidding of the most powerful governments in the world.

Update, Sept. 14, 2021: This article was updated to include information about Facebook’s XCheck list reported by the Wall Street Journal.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
