For years, users have been calling on the company to make the site safer. And now, at least according to a recent blog post from Twitter’s General Manager Ed Ho, some of their appeals have been answered.
Back in January, Ho tweeted a thread about the social media network’s ramped-up efforts to tackle the issue, writing, “Making Twitter a safer place is our primary focus and we are now moving with more urgency than ever.” He admitted that the company didn’t move fast enough to address abuse in the past, and said that his team would start speedily rolling out product changes. That particular week, he tweeted, they were introducing overdue fixes to muting and blocking, and working to prevent repeat offenders from creating new accounts.
On Thursday, Ho’s blog detailed some of the other efforts Twitter has made in the past six months. Among others, he wrote, the company convened a Trust and Safety Council that included “safety advocates, academics, researchers, grassroots advocacy organizations and nonprofits focusing on a range of online safety issues—from child protection and media literacy to hate speech and gender-based harassment” to help Twitter tailor new policies and features. The team also conducted research and made algorithmic changes like producing better search results and collapsing potentially abusive or low-quality tweets.
In the post, Ho seemed confident that the reforms were leading to substantial progress.
“While there is still much work to be done,” he wrote, “people are experiencing significantly less abuse on Twitter today than they were six months ago.”
But we’ll have to take the company’s word on the metrics. Twitter hasn’t released any internal data yet, though Ho did disclose a few positive measures.
For one, he said, Twitter is taking daily action against abusive accounts at 10 times the rate it did this time last year. It also began imposing temporary limits on abusive accounts, which he said resulted in 25 percent fewer abuse reports from those users. Of the accounts put on probation, 65 percent haven’t had to be restricted again.
Yet on some other significant measures, the company remains opaque. Twitter hasn’t been as vocal about how other major changes, such as its new “algorithmic timeline,” have changed the nature of abuse, discourse, and engagement on the site. Nor has it addressed why its moderators are still missing some flagrant abusers.
As Slate’s Will Oremus detailed last year, Twitter also hasn’t explained how, exactly, it interprets its “hateful conduct” policy. The site reportedly retrained moderation teams to enforce stricter anti-harassment policies last year. After the November election, it banned several alt-right figures, including Richard Spencer, for espousing racist views—though it declined to say what specific tweets led to the suspension.
Critics argue that this gives Twitter room for double standards. One prominent beneficiary: President Donald Trump. The commander in chief has posted content on his personal account that some think warrants a ban (does a GIF of him body-slamming CNN ring a bell?).
In a meeting with journalists at Twitter’s San Francisco headquarters in July, a Recode reporter asked Vice President of Trust and Safety Del Harvey if the company treats Trump like everyone else.
“We apply our policies consistently. We have processes in place to deal with whomever the person may be,” Harvey told Recode. “The rules are the rules, we enforce them the same way for everybody.”
In April, Twitter co-founder and CEO Jack Dorsey also told Wired that his company held all users to the same standards, but added that company policy also accounted for “newsworthiness.” He said he thought it was important to “maintain open channels to our leaders, whether we like what they’re saying or not, because I don’t know of another way to hold them accountable.”
Though the post on safety updates this week said that users were experiencing significantly less abuse, it didn’t address whether individuals actually felt safer. Ho wrote that Twitter would continue to solicit feedback. He also said it would remain committed to making the site a safe place for free expression.
Its users will be the judges of that.