Twitter bots—automated accounts that blast the same message at different users—are proliferating on the platform. Operated by a range of actors, from pro-government Syrians to private companies, these automated accounts spam our timelines with inane or useless information, usually to advertise or to crowd out a popular narrative.
These bots are, at best, annoying. At worst, when targeted at a particular hashtag (such as #Bahrain), they can drown out more pertinent information. But they can easily be reported as spam and, as such, tend to disappear quickly from Twitter timelines and search.
For this reason, as well as others, Philip N. Howard’s recent proposal to build “pro-democracy Twitter bots” is perplexing. Noting the use of pro-government bots from China to Venezuela, Howard proposes a new type of propaganda, suggesting these bots will “[expand] the news diets of people in other countries” and could be used to “[critique] tough dictators.”
If, as Howard suggests, these bots were merely used to tweet links to engaging content into the ether (rather than directly to the accounts of individual users), then they would surely be innocuous, but also ineffective. Twitter is all about engagement, and few people are interested in following an account that tweets links but interacts with no one (@horse_ebooks notwithstanding). Furthermore, what Howard seems to propose here is merely a news service … like Voice of America or Radio Free Europe/Radio Liberty, both of which already have active Twitter accounts.
Howard’s proposal is careful not to cross any controversial lines, but in doing so it renders itself impotent. How, exactly, would one (or a dozen) Twitter accounts that tweet links to stories about democracy be effective in combating an army of Twitter accounts targeting individuals with propaganda? It would merely leave the conversation to the bots, all of which would quickly be relegated to spam by Twitter anyway: The platform’s algorithms are designed to detect repeated messages, and those not automatically detected can easily be reported by users. The alternative—targeting the bots to tweet at anti-American or anti-democratic accounts—is not likely to be any more effective.
But the real question should be why Howard feels that bots—rather than, say, genuine democracy enthusiasts—are the right answer to the complex issue of democracy promotion. Anti-democratic ideals take root over time and cannot quickly be undone by sharing a few feel-good news stories. Howard admits that anti-democracy bots are likely to have little effect on users in democratic countries. So why does he believe the opposite to be true?
If we were serious about an online response to authoritarian sentiment, we need only look to the Peace Corps for inspiration. Former Ohio Gov. Robert Taft famously said that the Peace Corps “breaks down the stereotypes and turns an American into a fellow human being.” That process occurs through lengthy on-the-ground engagement and hard collaborative work.
Replacing Peace Corps volunteers with American flag-clad robots could never have the same effect. Any meaningful engagement with online communities abroad must consider what allows those connections to change hearts and minds. We may not be able to effectively counter the armies of bots proliferating on social networks, but by engaging individuals in real and sometimes difficult conversations about the meaning of democracy, perhaps we can render them unimportant.