What if we told you that fighting ISIS could be done cheaply, relatively easily, and in a manner that would neither escalate the conflict nor put anyone in harm’s way? You might think that’s too good to be true. It’s not. One thing we know about ISIS is that recruiting is crucial to its success and to maintaining its ranks. This recruiting, we are told, is sophisticated, utilizes both propaganda and direct interaction with individuals, and spans a variety of social media platforms. It can require thousands of hours of work in the form of instant messages, tweets, and Skype calls. In short, it is very time-intensive. Time (and expertise) is not unlimited, of course, but that constraint holds only if one is a human. Enter the next stage in undermining and fighting ISIS: artificial intelligence.
One way of undermining any group’s ability to fight is to frustrate its ability to gain new fighters and supporters. Another is to inject “noise” or disruption into the information systems of a group’s decision makers so they are unable to achieve their goals. These tactics are not new, but the means by which we propose to pursue them are. How might we undermine ISIS’s ability to refill its ranks and disrupt its OODA loop (Observe-Orient-Decide-Act)? Let’s utilize our technological capacity to create chatterbots that consume recruiters’ time and attention.
A chatbot is an artificial conversational entity: basically, an A.I. that talks with people. The chats can take the form of written text or even voice. There are many chatbot technologies available now, but an ISIS recruit bot would be more complicated than something like Elbot. An ISIS recruit chatbot would need to be sophisticated enough to trick an ISIS recruiter—a person with limited resources—into believing that the entity on the other end is real. Anything less and the recruiter will not waste time and bandwidth chasing digital deceivers. Moreover, this chatbot would need to be paired with machine learning in two ways. First, the bot would need to patrol various social media sites to look for potential recruiters and pose as a target. Second, it would have to be able to adapt to changes in language and content during conversation in order to “speak” with an ISIS recruiter. We might also want to give it the ability to “realize,” from language cues, when the recruiter is trying to test or evade it.
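To make the idea concrete, the conversational core of such a bot could be sketched as follows. This is purely illustrative: the keywords, replies, and class name are hypothetical placeholders, and a deployed system would learn its responses from data rather than hard-code them.

```python
import random

# Illustrative sketch only: a minimal rule-based responder. A real system
# would need natural-language understanding, dialect modeling, and
# social-media platform integration far beyond this.
class DecoyBot:
    """Keeps a conversation going with generic, time-consuming replies."""

    # Hypothetical keyword -> reply templates.
    RULES = {
        "travel": "I want to know more, but how would I even get there?",
        "money": "I don't have much saved. What would I need to bring?",
    }
    FALLBACK = [
        "Can you explain that again?",
        "I'm not sure I understand. Tell me more.",
    ]

    def respond(self, message: str) -> str:
        text = message.lower()
        # Match on the first known keyword, if any.
        for keyword, reply in self.RULES.items():
            if keyword in text:
                return reply
        # Default: a stalling question that invites another message.
        return random.choice(self.FALLBACK)
```

Even something this crude satisfies the article’s minimum requirement: every reply, however generic, demands a human recruiter’s time to read and answer.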
This second challenge—the linguistic one—is the more difficult. Sophisticated chatbots would require a high-level understanding of the language actually used by these groups in social media and online forums. As ISIS has members from across the globe, bots would have to be able to speak in diverse dialects. For recruiting Westerners, the bots must be able to speak many of the common Western languages and dialects, like American English and British English, or various German, French, or Spanish dialects. But most importantly, to recruit those who speak Arabic, the bot will at minimum have to be familiar with the Arabic dialects used by ISIS forces as well as by potential recruits. This is a challenge, given the enormous dialectal diversity of colloquial Arabic. Moreover, one may also want to include some of the more than 2,100 languages and dialects of Africa—for example, Hausa, which is used by Boko Haram.
We can think of the problem of dialect consistency as the “water-fountain problem.” If you want to recruit people from Wisconsin by claiming that you too are from Wisconsin, you cannot make the mistake of calling a particular free water-providing object a “water fountain”; instead, call it a “bubbler.” In short, we need chatbots that can “speak” in the different idioms and dialects of a range of social classes, educational levels, and sophistication, from a variety of places.
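The water-fountain problem can be sketched as a simple per-dialect lexicon lookup. The entries below are illustrative examples only, not a real dialect lexicon, and a serious system would need far richer models of idiom and register than word-for-word substitution.

```python
# Illustrative sketch: consistent regional word choice via a per-dialect
# lexicon mapping general terms to the local variant.
DIALECT_LEXICON = {
    "wisconsin": {"water fountain": "bubbler", "soda": "pop"},
    "uk": {"elevator": "lift", "apartment": "flat"},
}

def localize(text: str, dialect: str) -> str:
    """Replace general terms with the dialect's preferred variants."""
    for general, local in DIALECT_LEXICON.get(dialect, {}).items():
        text = text.replace(general, local)
    return text
```

A bot claiming to be from Wisconsin would then render “meet me by the water fountain” as “meet me by the bubbler,” avoiding the telltale slip the article describes.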
This may all sound too hard, given the variety of languages at hand, but it need not be so. While the chatbots and language learning should be continually improved, even early, unsophisticated bots could still waste enough time and generate noise to make a difference. Limited interaction with an ISIS recruiter, perhaps just a few exchanges, could still be enough to force ISIS operatives to commit some time and effort to sorting valid inquiries from false ones. These chatbots don’t need to be sophisticated enough to pass a Turing test, just good enough to get the ISIS operative to respond.
Additionally, unsophisticated chatbots may actually mirror some of the local or regional potential recruits, as many may have had limited schooling in Arabic or learned heritage Arabic at home. Due to the variance and unpredictability of these more colloquial speakers, ISIS recruiters have likely already realized that many potential target recruits may use poorly formed Arabic. Thus, an ostensibly illiterate chatbot might still look like a potential recruit.
This proposal would require no new major technological innovations, nor major investments of personnel or time above the creation of the A.I.s. It’s also highly ethical, as it could disrupt ISIS’s operations using targeted, nonlethal means that will not affect other networks, unlike cyberoperations using viruses or other malware. Indeed, the “harms” fall completely to ISIS—wasted time, lower recruiting rates—and what is more, would be completely self-inflicted by its own inability or unwillingness to recruit openly.
As with any idea, there are potential risks and drawbacks. For instance, we know that some of ISIS’s recruiting takes place on Facebook or in public forums with innocent people who have no interest in joining ISIS. Nonetheless, ISIS recruiters monitor these sites to find potential new members. Indiscriminate use of chatterbots in these forums could disrupt these communities by manipulating the beliefs of innocent members, many of whom are already marginalized individuals.
A second concern might be that this option would be best undertaken quietly by the U.S. government, not proposed openly in Slate. This is part of what’s interesting about the proposal: Its success or failure doesn’t depend on whether ISIS knows that the U.S. government, or any other actor, is doing it. If the chatbots are bad at “speaking,” then ISIS recruiters would ignore them regardless of whether they know they are conversing with an A.I., just as most of us easily ignore the obviously machine-generated comments or responses to emails, blogs, or articles. If instead the chatbots do “speak” approximately like people—with appropriate responses, “understanding” of dialects, and so on—then even if ISIS knows that some potential “recruits” are simply A.I.s, it will still have to follow up on those leads or else give up altogether on recruiting in those venues.
Of course, ISIS could change its recruiting tactics, but even those changes would be victories for the rest of us. ISIS may abandon open forums and message boards, which could protect or enrich those communities for legitimate members. It might introduce “key words,” both to signal to potential recruits and to have potential recruits signal back, in which case the chatterbots simply add to the noise: either the chatterbots would adapt (if the keywords were shared openly) or the pool of possible recruits would shrink (if the keywords were hard to find). The overall cost-benefit calculation may even lead ISIS to stop recruiting online altogether, which would be a major win for the rest of us, because this is one of its main sources of new members.
Even if the chatbots are poorly trained at the start, an A.I. can improve rapidly, in a matter of days. The training data the chatterbots acquire during that time would be almost priceless. More importantly, the bots can be ubiquitous. They do not need to sleep, they require no insurance or overhead, and no one is placed in danger if they are “found out.” ISIS cannot post the addresses of the chatbots or their family or friends online.
ISIS is one of the most sophisticated radical groups in its use of social media and online forums for recruiting, propaganda, and advertising. These tools have enabled it to recruit from anywhere on the globe and with great speed and effectiveness. However, with linguistically sophisticated chatbots imitating recruits and injecting noise into the recruiting machine, there just might be a chance to turn the rising tide of recruits into slow-moving sludge.
*Correction: Oct. 21, 2015: Due to a production error, this article originally left off the byline of Heather M. Roff. Her byline has been added.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.