Online harassment always seems to be in the headlines. There was Slack, which recently launched an ill-conceived update that could let you direct message anyone with a Slack account. While users could disable this feature, Vice discovered that an email displaying the message was still sent to recipients, and the message notification would still appear in someone’s Slack, even with the global messaging feature disabled. After people pointed out this feature could be weaponized to harass people, Slack pulled the feature and apologized. In a more acute incident, in March, Fox News’ Tucker Carlson berated New York Times journalist Taylor Lorenz over a tweet she had written earlier that day—a tweet in which she simply asked for people to support women who are facing online harassment. Carlson’s response was to say that Lorenz has nothing to complain about because she has “one of the best lives in the country.”
Let’s be clear: Harassment is a threat to journalists, especially journalists of color, women, and nonbinary people. Reporters receive repeated onslaughts of abuse, death threats, and rape threats. This harassment harms people in real, tangible ways, and much of it is directly related to journalists’ work. Journalists have been killed all over the world for what they report. The Committee to Protect Journalists found in a 2017 study of murdered journalists that in at least 40 percent of cases, those journalists had received online harassment and threats leading up to their deaths. A 2019 CPJ survey found that 70 percent of respondents had experienced safety concerns on the job, and 90 percent indicated that online harassment is one of the biggest threats to journalist safety. Amnesty International’s report “Toxic Twitter,” which focused on female journalists and politicians in the U.S. and the U.K., found that Black women experience the most harassment online.
In 2019 and 2020, we led a research project along with Elyse Voegeli and the Harvard Kennedy School looking at the online harassment journalists face and how platforms can be redesigned to alleviate some of the digital harms and harassment. We began by exploring the misuse of trust on platforms and how design harms users, in particular journalists. We chose journalists because, by the nature of their job, they have to stay online to find, report on, and uncover breaking news. We ran two surveys, of 230 designers and 81 journalists, on the design of digital spaces, trust, and dark patterns. We conducted two workshops with 20 journalists in Mexico City affiliated with the Online News Association and co-designed new features in those workshops. We also interviewed 31 journalists in China, Hong Kong, Iran, Palestine, Malta, Guatemala, Afghanistan, the United Kingdom, Canada, the United States, Pakistan, India, Nigeria, Germany, Romania, and Mexico.
We found that journalists have been targeted by terrorist organizations, governments and other state actors, white supremacists, and ordinary readers, often from their own cities. The harassment includes doxxing, hacking attempts, death threats, rape threats, antisemitism (regardless of the target’s actual religion), and repeated harassment about their looks and intelligence. Some harassment campaigns became widespread enough to be listed as trending events on Twitter in their countries.
And when journalists find themselves to be the target of such harassment, there’s often very little they can do. Facebook, for example, considers journalists to be public figures. A private individual is generally protected from harassing content that directly mentions the individual, while some of that speech may be allowed if the target is a public figure. Ultimately, the platform believes that engagement on public figures’ pages isn’t just about the figures themselves, but can be “general conversation.” For instance, someone threatening to dox a private individual or writing that they hope something bad or violent happens to that person is considered harassment and should be taken down. But historically, that kind of harassment has been allowed against public figures. Generally, Facebook says it will remove harassment like threats and direct attacks against public figures; however, in recently leaked internal documents, the Guardian found that Facebook allows public figures to receive some kinds of targeted harassment, including “calls for their death.”
Facebook in September 2020 changed its harassment policy to add more nuance and protections for public figures, including “involuntary” ones (such as people from viral content like Alex from Target—remember him?). However, most journalists we spoke with said they are still receiving harassment. Nearly all the journalists interviewed in our research repeatedly mentioned this kind of inconsistency: platforms promise to mitigate harm yet fail to recognize harassment when it happens.
There are lots of security tips, trainings, and guides for journalists to help better protect themselves online. But even if individuals protect themselves, they can’t fix harassment as a phenomenon, or even stop the harassment they face. The real systemic change needs to come from companies. To truly protect journalists, the policy and design of platforms need to change.
There are lots of things platforms could do right now to help. For one thing, they can proactively research these threats themselves and work more actively and transparently with researchers, civil society, and academia on harassment-related issues.
More specifically, platforms need to give more control to journalists on social media. That means creating a new category—beyond “public” and “private” figures—that would allow journalists to access more nuanced privacy settings for safety without looking like they’ve shut down their account. As we saw in cases of coordinated harassment, during Gamergate and now on Clubhouse, when a victim goes private, it often signals to harassers that they’ve “won.” Instead, platforms should let journalists clean up their mentions—by which we mean allowing them to delete content that tagged them, not just mute or block—and create better filtering systems of their own choosing, so the filters reflect the specific harm they are facing. This goes beyond keyword blocking and would be almost like being able to sort, gather, and select all emails from a single sender. Imagine if users could clean up their mentions in a similar way through batch reporting of tweets, batch muting, and batch blocking of users. Right now, by contrast, you have to do that tweet by tweet, user by user.
Along the same lines, platforms should give journalists (and any other user) the ability to create one cohesive report that pulls together multiple examples of harassment, a list of users, and even links from other websites. This way, if people are using Reddit to coordinate harassment of someone on Twitter, a moderator would be able to see a fuller view of what’s happening. Platforms also need to create options for journalists to open a harassment report as a draft and add to it over time.
At the same time, platforms need to enhance options for users to contest or reopen reports and improve transparency so users can see why a report was rejected. Many of the journalists we spoke to mentioned how many of the harassment reports they had filed were concluded to be “not harassment,” even though they included antisemitic statements and egregious threats.
Lastly, we recommend a specific suggestion from a journalist at our Mexico workshop who became a trending hashtag and then faced death threats online: Currently, you can only report a hashtag on Twitter for being spammy or harmful. Harassment of journalists is harmful, yes—but Twitter would get much more detailed information if it allowed for more specific categories like doxxing, misinformation, targeted abuse, and other distinct harms.
Everyone, regardless of their profession, deserves to be safe online. But Black, brown, Indigenous, Asian, nonbinary, trans, and women journalists especially should not have to deal with harassment just to work in the media.