In a Thursday blog post, Twitter announced that it had suspended 235,000 accounts since February for violating its ban on violent threats and the promotion of terrorism. Earlier this year, the company announced that it had shut down 125,000 accounts between mid-2015 and February 2016 in an effort to stifle accounts used to promote terrorism.
Public reproach of Twitter has been particularly harsh, with critics arguing that the company is providing a platform for terror groups to grow—one such claim even made it to court, although it was recently dismissed. In April, Jean-Paul Rouiller, director of the Geneva Centre for Training and Analysis of Terrorism, told CNN that social media is vital to modern terrorist organizations: “They would not have been able to survive, they would not be able to recruit people. The human touch is always needed, but social media is their shop-window.”
Having been criticized by members of Congress and others for harboring terrorists, Twitter wants to make it clear that it takes the problem seriously. It reports that “response time for suspending reported accounts, the amount of time these accounts are on Twitter, and the number of followers they accumulate have all decreased dramatically.” Importantly, the company has also found ways to “disrupt the ability” of suspended users to return immediately to the platform.
How is Twitter finding these accounts? The company admits that there is no magic algorithm for identifying terrorist content, but it has turned to tools such as “proprietary spam-fighting tools to supplement reports from our users and help identify repeat account abuse.” Twitter says these tools are responsible for identifying a little more than one-third of the accounts suspended. Twitter has also expanded the teams reviewing reports to help identify potentially dangerous users. Additionally, it has been developing partnerships with organizations that work “to counter violent extremism (CVE) online.” Moreover, the company is working to “empower credible non-government voices against violent extremism,” such as Parle-moi d’Islam and True Islam. The company’s actions aren’t entirely generated from within: It also works with law enforcement to support investigations aimed at preventing and prosecuting terror attacks—as long as those requests comply with Twitter’s law enforcement guidelines. (On the bright side, Twitter might even help locate terrorists or assess their activities.)
Although Twitter is working to reduce terrorist activity on its platform, it will never be able to identify all malignant users. Ultimately, the openness of the platform is what its users like—including those with malicious intentions.