On Tuesday, Twitter released a blog post announcing a set of new policies targeted at mitigating harassment on its platform. My colleague David Auerbach has already parsed the new rules for clues on how Twitter is balancing its interest in user safety and its commitment to freedom of expression. I’m just as interested in the medium as I am in the message.
Have you ever noticed that Twitter executives prefer to talk about harassment on Twitter just about anywhere except on Twitter? Twitter CEO Dick Costolo took questions from Twitter users on CNBC last year; more than a quarter of the queries concerned Twitter abuse, but Costolo didn’t address any of them. The first indication that Costolo considered harassment on Twitter a real problem came from an internal memo that leaked this February, in which Costolo wrote, simply, that Twitter “sucks at dealing with abuse.” He later submitted to a Q&A with the New York Times, and last week, Twitter general counsel Vijaya Gadde contributed an op-ed to the Washington Post acknowledging that the platform’s approach to harassment has been, “to put it mildly, not good enough.”
And yet: Costolo hasn’t tweeted about the new harassment rules, though he did retweet Ryan Seacrest the other day. Peruse the official @Twitter account, and you’ll find a lot more enthusiasm for promoting the new Star Wars emoji or spreading the #Coachella hashtag than clarifying the platform’s new anti-harassment rules. The blog post drafted by Twitter director of product management Shreyas Doshi to explain the changes clocked in at under 500 words; Twitter HQ had more to say about how its users can best capitalize on the #NBAPlayoffs hashtag. This pattern does not suggest a tremendous amount of faith in Twitter’s ability to host civilized discourse on important issues.
Twitter users, on the other hand, are quite eager to use Twitter to talk about harassment on Twitter. Every time Twitter execs release these little morsels about their evolving approach—some statements in the Washington Post here, some info on a blog there—it inevitably sows confusion and speculation among users, and that discord is expressed overwhelmingly on Twitter. This presents a golden opportunity for the company. The fact that Twitter users are complaining about Twitter on Twitter is a sign that they value the platform highly enough to invest in making it better and that they find it useful enough as a communications tool to share their ideas there. Otherwise, they’d just delete their accounts.
Thousands of regular people are tweeting about online harassment in the hopes of having some influence over a platform that, for many, has become their community. Costolo should be eager to host a dialogue about the new rules on his own platform, where he can answer simple questions, address concerns, and nod to the many fabulous ideas that are being sourced from Twitter users themselves. I can think of no greater confirmation of Twitter’s promise as a democratic communication tool.
Until that happens, I’ve got a few questions here. Twitter’s new rules have widened its definition of abusive speech and expanded its suite of tools for dealing with harassing users. That suggests that Twitter is willing to get creative in tackling harassment, which is great, but it’s also made Twitter’s disciplinary process even less transparent. Twitter is rolling out a new program designed to automatically identify abusive tweets and prevent them from reaching their intended target. Other abusive tweets could cause the offending user to be suspended from his account for a 12-hour period. Others could trigger a process where a harasser would be required to delete certain tweets or verify his phone number in order to get back into his account. Still others could lead to permanent suspension. But which tweets warrant which punishment? And how can a harassed user know whether Twitter is really taking the steps it’s promised to protect her? I’ve asked Twitter to better clarify the distinctions, and I’ll update if I hear back.
Here’s another question: Why not make that new harassment screening tool optional? I’m sure some people would prefer not to see a death threat appear in their feeds, but personally, if some online obsessive is publicly stating his intent to kill me, I’d like to know. That knowledge can empower me to protect myself by barring certain people from my public events, or more carefully screening my emails and calls, or contributing to the pile of evidence I might need to secure a restraining order against a harasser. And besides, Twitter harassment can cause harm even if it never reaches its target. We’re not talking about a threatening tweet falling in the forest here; threats posted in public are specifically designed to stoke communitywide fears and forge solidarity with like-minded abusers.
Caroline Sinders is an interactive designer, digital anthropologist, and one of many Twitter users who tweeted suggestions Tuesday about how Twitter can make its screening tool even better. She’s proposed that Twitter take a cue from the typical email spam filter and allow users to check in periodically on what sorts of materials are being whisked out of their sight—and to restore harmless messages if they choose.
Part of what makes Internet harassment so damaging is the feeling of powerlessness it produces in its victims. No matter how Twitter decides to crack down on harassment on its platform, it can help alleviate that feeling by being transparent about its processes and opening itself up to user critiques. Right now, it looks like the platform that’s ostensibly so committed to fostering speech is afraid of a little dialogue.