The following is adapted from Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity, by Samuel Woolley, just published by Yale University Press. Copyright © 2023 Yale University Press. Reprinted by permission of Yale University Press.
If you met D.W., a small business owner living in a city in the United Kingdom, you wouldn’t think he masterminded sophisticated computational propaganda campaigns out of his small apartment’s living room. But he runs hordes of bot and sockpuppet accounts across several social media platforms, making him what I call an “automated political influencer.” He’s just one of many regular folks who hold down a range of normal-seeming day jobs while running fairly complex influence campaigns on their own time.
Most of the automated political influencers I’ve met do their work without any direction or support from a political campaign or government. They do it simply because they want to—because they believe in the causes, politics, and viewpoints they are spreading, and they want to give them a wider currency. Of course, some individuals do receive government support for this type of work; citizens in Venezuela who spread pro-government Twitter propaganda, for example, receive small payments or vouchers from the state. But this is not the case for the automated influencers working across the world to achieve their own offbeat political goals.
The people I am concerned with represent a phenomenon fairly singular to the present day: For the first time, technology allows everyday people to run propaganda operations at scale without much coding knowledge or financial outlay. Easily available consumer-level technologies—the internet, automation tools like “If This, Then That” (IFTTT), and increasingly approachable coding languages like Python (which my interviewees regularly mentioned as their go-to language for bot creation)—now allow propaganda to be created and spread by the mom next door, your dad’s fishing buddy, or that nice clerk at the hardware store. These individual propagandists range in age from teenagers to retirees. The only prerequisites are access to a computer, an interest in politics, and a rudimentary, most often self-taught, education in social media marketing.
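To give a sense of just how low that barrier to entry is, here is a rough illustrative sketch in Python. Every handle, slogan, and function name below is invented for illustration, and the script deliberately stops short of connecting to any platform; a real operator would wire something like this to a posting API or an automation service, which is where tools like IFTTT come in.

```python
import itertools
import random

# Hypothetical personas and messages -- none of these are real accounts.
PERSONAS = ["earnest_ed", "sarcastic_sal", "funny_fran"]
SLOGANS = [
    "Vote for change on Thursday!",
    "Have you seen the latest manifesto?",
    "Our candidate actually answers questions.",
]

def make_posts(n, seed=0):
    """Pair n posts with personas (round-robin) and randomly chosen slogans.

    The seed makes the draw repeatable, which is handy when testing
    which message variant gets the most engagement.
    """
    rng = random.Random(seed)
    persona_cycle = itertools.cycle(PERSONAS)
    return [(next(persona_cycle), rng.choice(SLOGANS)) for _ in range(n)]

# Generate a small batch and print it instead of posting it anywhere.
posts = make_posts(6)
for persona, text in posts:
    print(f"[{persona}] {text}")
```

A dozen lines of standard-library Python, no paid infrastructure, no special expertise: that is roughly the scale of effort the influencers described here are working at, before any platform-specific plumbing is added.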
D.W. told me that he used social media bots, sockpuppet accounts, and other forms of automation for a variety of purposes. He viewed his use of online automation as a form of political dissent, as activism. He first started “playing around” with bots when he became unemployed. In the U.K., if you are unemployed, you can receive a job seeker’s allowance, but you must be actively applying for jobs. D.W. wrote a piece of code to automate job applications for him so that he could get his stipend. According to him, the terms and conditions of the government website did not prohibit automating the process, and to him this was a form of protest—an exercise that “expose[d] the horrible bureaucracy that people get caught up in for being poor.” Later, he began building social media bots to spread messages of support for Jeremy Corbyn and the U.K. Labour Party. D.W. experimented with different personalities for these political bots: some used humor, some were sincere, others were sarcastic or passive. His goal, he said, was to figure out which one got people to actually engage. He was after behavioral change.
D.W. does most of what he does for political reasons, because he is personally invested in politics. Other individual automated political influencers work for money. These influencers are not being paid in the same way as the villagers in Venezuela—getting cents on the dollar from the government for tweets. Many of the automated political influencers I spoke to in this category saw their bot-building as a freelance business. Many offered their services via websites like Fiverr, an Israel-based online marketplace where freelancers bid at rock-bottom prices for online work; this subset of influencers sold or rented social media bots for small amounts of money.
Other freelance bot builders used social media bots to bring in ad revenue by driving attention to their own social media profiles or websites. Still others had large, sophisticated operations, renting out entire social media botnets to anyone with money. They are generally mercenary in their approach, working for groups with a range of political leanings—as long as they pay. A teammate of mine at Oxford spoke to the proprietor of one such operation in Poland, who said he maintained over 40,000 unique online identities that he rented out for “guerrilla marketing” purposes to various political and commercial entities. His accounts—cyborg accounts that used both automation and human oversight—could be launched to talk up particular products or companies in a way that seemed organic. As he openly admitted during the interview, he had also rented his accounts out for political purposes during national and European Union–wide electoral campaigns.
Most of these individual digital propagandists’ bot accounts take advantage of online anonymity (although some—especially those seeking ad revenue—automate social media profiles that use their real names). They all depend to some degree on bots, sockpuppets, or other automated features of the web, including trending and recommendation algorithms. And definitionally, they all engage in at least some inherently political work. The influencers I sought out and spoke to are not those who use bots simply to sell ordinary commercial products (though some do that too): the people I am talking about here are all using digital tools to sell political ideas.
There has been a recent shift in how automated political influencers (and even organic, nonautomated political influencers) operate. Political campaigns and other elite actor groups have taken notice of influencers who are interested in politics—both celebrity and small-scale. Members of my research team and I documented in Wired the rise of the latter group—what we called the partisan nanoinfluencer.
These nanoinfluencers, who generally have a following of 5,000 or fewer, are now being recruited and paid by political campaigns and other groups to spread particular types of content during elections. According to the digital political marketers we have spoken to, the logic is that these regular people have more intimate, local connections with their followers, and because of this their messages hit harder. (This is the same principle I’ve seen used for WhatsApp propaganda campaigns: that we are more likely to believe and act on information we hear from a trusted source with whom we feel a personal connection.) According to our own experience and that of other researchers, it’s likely that some of these influencers—both celebrity influencers and “regular” nanoinfluencers—use social bots to achieve their goals.
In many ways, the use of paid influencers is a logical step for propagandists working online. Human-run accounts are, after all, more difficult for platforms to justifiably delete than bots or sockpuppets. It can be hard, put another way, for entities like Meta, Twitter, and Alphabet to judge such activity as coordinated inauthentic behavior, which would be grounds for suspension or deletion.
Nevertheless, it’s crucial for platforms and governments to consider how to regulate this activity. What happens when hordes of paid (or unpaid, for that matter) influencers systematically spread misleading content about how, when, or where to vote? When they harass particular candidates, journalists, or demographic groups? If they are working on behalf of a political campaign, what happens if they don’t disclose this connection? These questions must be considered and addressed via clear, democratic, and technologically feasible policies. Also, crucially, people around the world, and especially young people, must be educated about the rise of influencer astroturfing. The more aware we become of the problem of online manipulation, the less likely we are to be swayed by it. Candidly, after years of watching tech companies and policy makers spin their wheels in the face of these issues, it is clearer to me than ever that ground-up, grassroots efforts will provide the most effective solutions.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.