When CNBC invited Twitter users to ask questions of Twitter CEO Dick Costolo last month, thousands of people chimed in with queries like, “Why is reporting spam easy, but reporting death and rape threats hard?” and “Why are rape threats not a violation of your ToS?” According to CNBC, more than 28 percent of the 8,464 questions submitted to the network concerned harassment and abuse on Twitter. But when Costolo appeared on CNBC’s Closing Bell, he didn’t address the problem of online threats. Instead, he fielded questions like, “How many fake accounts does Twitter have, and be honest?” and “Why is there no edit feature to fix typos?”
The company’s typical response to complaints about abusive and harassing behavior on Twitter is to advise users to fend for themselves. The network tells abused individuals to shut up (“abusive users often lose interest once they realize that you will not respond”), unfollow, block, and—in extreme cases—get off Twitter, pick up the phone, and call the police. Twitter opts to ban abusive users from its network only when they issue “direct, specific threats of violence against others.” That’s a criminal standard stricter than the code you’d encounter at any workplace, school campus, or neighborhood bar.
What this approach fails to recognize is that online harassment is a social problem (one that disproportionately affects the same folks who are marginalized offline, like minority groups, LGBT people, and women), and making the Internet a safe and equitable place to communicate requires a social solution. So now, some Twitter users are stepping up to provide ad-hoc fixes where Twitter itself has declined to dabble. On Monday, Jacob Hoffman-Andrews, senior staff technologist at the Electronic Frontier Foundation, unveiled Block Together, an app “intended to help cope with harassers and abusers on Twitter” that allows users to “share their list of blocked users with friends” and, if they like, “auto-block new users who at-reply them.” (Twitter itself doesn’t even allow users to see users they’ve blocked in the past, much less share the list with others.) Hoffman-Andrews hopes the software is simply “useful enough to be interesting,” and that it represents the start of Twitter community members banding together to take back the platform from the trolls.
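The core of the shared-block-list idea is simple. A minimal sketch, using hypothetical account names rather than Twitter’s actual API:

```python
# Hypothetical block lists; a real app like Block Together would fetch
# these from Twitter's API on behalf of each subscribed user.
my_blocks = {"troll_1", "troll_2"}
friend_blocks = {"troll_2", "troll_3"}

# Subscribing to a friend's list simply merges it into your own,
# so anyone your friend has already blocked is blocked for you too.
combined_blocks = my_blocks | friend_blocks
print(sorted(combined_blocks))
```

The point of the sketch is that the data involved is trivial to share; what users lack is not technology but a channel Twitter itself provides.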
Hoffman-Andrews isn’t the first to see an opportunity in Twitter’s ambivalence. He says Block Together was inspired by Flaminga, an in-development Twitter app created by Cori Johnson that helps Twitter users conspire to create secret mute lists they can share with one another to silence users they don’t want to hear from. With Flaminga, a user could create a list that instantly mutes all the Twitter accounts that have called her a sexist slur, then share it with friends who would also prefer not to be called a bitch for airing their opinions or reading the morning news. Or she could start a “mansplainer” list for identifying users who are not outright bigots but are simply too irritating for verbal sparring. Flaminga also offers filters that allow users to mute a user and all of their followers (to avoid paging through a concerted pile-on attack) or to mute Twitter accounts created too recently (to silence users who try to evade blocks by tweeting under a fresh username). Block Together also takes a cue from the Block Bot, an app that identifies Twitter’s “anti-feminist obsessives” (they’re nominated for inclusion by a group of trusted, preapproved users), sorts them into categories of offensiveness ranging from “tedious and obnoxious” to “abusive bigot,” and allows users to pick the level of vitriol they’d like to excise from their Twitter feeds.
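The account-age filter is the most mechanical of these heuristics. A minimal sketch of that logic, with an assumed threshold rather than Flaminga’s actual rule:

```python
from datetime import datetime, timedelta

def looks_like_block_evader(account_created, now, min_age_days=7):
    """Flag accounts younger than a threshold, on the theory that
    harassers evading a block tweet from freshly made usernames.
    The 7-day cutoff is an illustrative assumption, not Flaminga's
    documented behavior."""
    return now - account_created < timedelta(days=min_age_days)

now = datetime(2014, 8, 5)
print(looks_like_block_evader(datetime(2014, 8, 4), now))   # day-old account
print(looks_like_block_evader(datetime(2013, 1, 1), now))   # established account
```

The tradeoff is obvious even at this scale: the filter also silences legitimate newcomers, which is why these apps offer it as an opt-in rather than a default.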
All of these applications offer a valuable service, and their shared impulse to crack down on Twitter abuse through community solutions is commendable. (Riot Games, which publishes the wildly popular multiplayer game League of Legends, has seen harassment plummet and positive interactions rise among players after instituting a community approach for dealing with abusive behavior.) But without Twitter’s cooperation, these developers are still focusing on selected users instead of addressing the problem on a site-wide level. Sharing my block list with my followers might alert a few people to a few bad apples, but all that will accomplish is offering a handful of people the option to block some vile tweets from view. This is, ultimately, in service of Twitter’s preferred solution—that users ignore abuse, pretend stalkers don’t exist, avert their eyes from harassment, and don’t bother Twitter HQ.
These apps won’t actually inspire Twitter to shut down the serial abusers who use their Twitter accounts to harass and threaten women. They won’t help attract serious legal attention to those abusers’ crimes. And they won’t compel Twitter to instruct its brilliant developers to imagine new site-wide solutions for the problem, or to lend its considerable resources toward educating government officials and law enforcement officers about the abuses its users are suffering on its network. Right now, Twitter doesn’t even have the basics down: University of Maryland law professor Danielle Citron, writing about a recent lawsuit filed against Facebook for ignoring revenge porn on its site, suggests that social networks can begin to serve harassed users by hiring more employees to sift through complaints instead of assigning the task to robots; prioritizing reports of threats over reports of spam; notifying users of the outcome of their complaints; and—above all—actually communicating with users on this issue.
Until that happens, I imagine Twitter’s users will still have a few questions for the CEO.
Update, Aug. 5, 2014: This post was updated to include links to quoted tweets.