Future Tense

How Facebook, Twitter, and Instagram Have Failed on Palestinian Speech

Destruction in Palestine from bombs shot by Israel and a tweet with several rocket emoji from Israel's official Twitter account
Photo illustration by Slate. Images via Mahmud Hams/AFP via Getty Images and Israel/Twitter.

After 11 days of fighting, Israel and Hamas agreed to a cease-fire on Thursday, with both groups claiming victory. At least 243 people, including children participating in a program intended to help them deal with trauma, were killed in Gaza. In Israel 12 people, including two children, were killed.

During the violence, social media platforms allowed some voices to be heard, while others were silenced. On May 11, Twitter temporarily restricted the account of Palestinian American writer Mariam Barghouti, who was reporting on protests against the expulsion of Palestinians from their homes in East Jerusalem. Twitter later said it was an accident. Twitter was also the platform where Israel’s official account tweeted more than 3,000 rocket emoji and said they represented “the total amount of rockets shot at Israeli civilians. Each one of these rockets is meant to kill.” A user replied with more than 100 children emoji, the number of Palestinian kids killed.

Instagram also made significant mistakes. The platform removed posts and blocked hashtags about the Al-Aqsa Mosque, the place where the conflict started, because its content moderation system mistakenly associated the site with a designation the company reserves for terrorist organizations. Facebook, which owns Instagram, announced Wednesday that it had set up a “special operation center” that will be active 24 hours a day to moderate hate speech and violent content related to the Israeli-Palestinian conflict.

To learn more about how platforms have struggled with posts around the latest Israel-Palestine violence, I talked to Dia Kayyali, a researcher who focuses on the real-life impact of content moderation and related topics. They are the associate director for advocacy at Mnemonic, an organization devoted to documenting human rights violations and international crimes in Syria, Yemen, and Sudan. Our conversation has been edited and condensed for clarity.

Delia Marinescu: Do you think Twitter should have taken action on Israel’s rocket-emoji tweets? If so, what would you have liked to see?

Dia Kayyali: That tweet specifically is offensive, but I don’t think it necessarily should be removed. It doesn’t necessarily constitute a threat, so on its surface I don’t think it violates Twitter’s rules. And it’s not necessarily spreading misinformation, so it doesn’t need to be labeled. Now, there are other tweets I’ve seen where they are justifying their actions—for example, I’m sure you saw the YouTube video that got removed.

That’s the sort of thing platforms need to be paying attention to and probably need to be labeling some of that content as misleading.

Last week, Instagram and Twitter blamed technical errors for deleting posts mentioning the possible eviction of Palestinians from East Jerusalem. Instagram said in a statement that an automated update caused content reshared by multiple users to appear as missing, affecting posts on other topics as well, and said it was sorry to hear that Palestinians felt that they had been targeted. Do you buy that explanation? Does this tell us anything about how these content moderation algorithms work more broadly?

I absolutely do not buy this explanation. If you are going to do some sort of update and you know how people are using your platform, that’s the moment you choose to do it? Absolutely not. It’s also not how they roll out updates. You don’t just roll out an update without testing it in different places; every time Facebook or Instagram makes some small change, like changing the reporting flow, they test it in small places first. Even if it were a mistake, which I don’t believe, it still reflects total negligence toward the human rights of Palestinians.

So do you think it was censorship?

Yes, I absolutely believe it was censorship. Censorship means government action, so it’s hard to talk about censorship when we’re talking about platforms. But in this case we know how close Facebook’s relationship is with the Israeli government—we know how rapidly they respond to Israeli government requests. Every public indication is that it’s happening because they’re listening to one side of the story and agreeing with it.

Last week, Instagram also removed posts and blocked hashtags about the Al-Aqsa Mosque because its content moderation system mistakenly associated the site with a designation the company reserves for terrorist organizations. How was that possible? How does content moderation work in a situation like this?

This is, again, unfortunately not a new issue. There is an ongoing issue where they associate certain words that are pretty well-known in our community with terrorists and violent extremists. The fact that they keep making that claim over and over again, when they are such a huge company with practically limitless resources, is really disingenuous. Al-Aqsa Mosque is not the only phrase that has been associated that way. For example, Shaheed appears on the slur list, but it’s also a common name in the region.

Earlier this week, a group of 250 Jewish Google employees called on the company to increase its support of Palestinians amid Israel’s deadly bombing campaign in Gaza. Among other things, they asked Google leadership to reject any definition of anti-Semitism that holds that criticism of Israel or Zionism is anti-Semitic. Why is this letter important, and how do you think Google should manage this situation?

I was incredibly overwhelmed with gratitude to see that letter. I hope that Google responds. Unfortunately I’m not very [optimistic]. Google has kind of been willing to have a lot of bad PR lately around issues like this.

Israeli extremists have formed more than 100 new groups on WhatsApp in recent days to coordinate attacks. Since WhatsApp cannot read the encrypted messages on its service, what kind of measures could this platform and Telegram—which is similar—take?

I think that’s one of the hardest questions to answer in this whole situation, because we saw something really, really similar in India, where I’ve done a lot of work. Content really encouraging violence is being spread in these sorts of groups. I think some of the same things that were helpful there will be helpful here. In India, at certain times, they put limits on how many times you can forward things, and that helps slow the spread of misinformation. But I don’t think there is a technical solution right now that doesn’t harm encryption. To be frank, we know that when law enforcement cares about extremist violence, they are able to infiltrate those groups, and it would not be difficult here. So it shouldn’t have to be a WhatsApp solution. People want the solution to be in the technology, but this is also a human problem.

There’s no question that encryption is completely necessary for a lot of human rights defenders. Particularly right now, people inside of Palestine are using heavily surveilled and controlled internet connections. So it’s one of those tools that’s incredibly important and also can be harmful.

Facebook set up a “special operation center” that is active 24 hours a day to moderate hate speech and violent content related to the ongoing Israeli-Palestinian conflict, a senior Facebook executive said Wednesday. What are the most urgent solutions they can implement?

In sort of typical fashion, other advocates working on this and I found out only when they made the announcement.

To be honest with you, hearing about this feels a bit like … what’s the point of having a special operation center if you’re going to continue to have these incredibly close relationships with the Israeli government and not take the other side of the conversation seriously? It doesn’t feel like it’s going to be helpful. It feels like it’ll probably result in more removal.

I think they should be a little bit clearer about what they’re trying to do with this special operation center. We know that Facebook failed horribly in Myanmar, but once they got a lot of bad press, they did work with Myanmar civil society, and they instituted some tools and policies that were helpful. Here, they said they are working with native Arabic and Hebrew speakers. Having appropriate language capacity is always an issue, so it’s good they will have native speakers, but they need to have people who actually speak the appropriate dialect. Arabic is not one language.

As far as Facebook is concerned, most Palestinians are Hamas—that’s how they treat content coming from the region. It’s great if they’re putting more resources on this, but it’s not going to help if they’re not doing it conscientiously to address the problems that civil society keeps bringing up.

What’s needed is really some co-design of policies and more transparency into the policies: understanding where automation is used in the process, where automation makes a decision and where a person does, and at what point the bias is creeping in. I mentioned the specific example of the word Shaheed appearing to trigger their automated removal. That’s one of the places where they should be working with civil society to make sure they are not, accidentally or on purpose, capturing things that are going to include a lot of protected speech.

I am curious if the special operation center is intended to rapidly respond to user appeals. That would be something really helpful.

Would you like to add something that I didn’t ask you and you consider important?

The International Criminal Court announced in March that it was going to be conducting an investigation into human rights violations in Palestine-Israel. And Israel made it clear that it is not going to cooperate with the investigation. Israel is not a member of the ICC, but just a few weeks ago the prosecutor warned against potential crimes against humanity taking place. The ICC doesn’t have all of the traditional tools of a court to get evidence, so it has increasingly been considering the use of social media content as evidence. In fact, in 2017 it issued a warrant for a war criminal based on videos found on Facebook.

So the point is that this content is useful and important for a lot of reasons, and one of them may be for actually prosecuting crimes that are happening, or for conducting a thorough ICC investigation. And that type of content is also getting deleted. It is actually evidentiary content that we are talking about.

Update, May 21, 2021: This article was updated out of concern for safety.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
