The Industry

Who Amplified Q?

Supporters of the QAnon conspiracy theory are now showing up at Trump rallies and in the Oval Office. On Twitter, they’ve been getting a mysterious boost for months.

A man wearing a QAnon T-shirt stands amid the crowd at a Trump rally.
The “Make America Great Again” rally in Tampa, Florida, on July 31. Joe Raedle/Getty Images

The most notable thing about the QAnon conspiracy theory might be how close it keeps edging toward the man it is about: Donald Trump. Supporters of the QAnon conspiracy theory were all over a Trump campaign rally last month in Wilkes-Barre, Pennsylvania, sporting T-shirts and signs and apparently unworried that anyone consuming coverage of the event would know they believe in a revisionist version of current events—narrated by “Q,” a mysterious online figure claiming to be a high-clearance government official—in which special counsel Robert Mueller is in fact working with the president to arrest a vast conspiracy (and pedophile ring) of global elites. If their presence at the rally seemed strange, however, it was nothing next to the appearance in the Oval Office two weeks ago of one of QAnon’s biggest promoters, radio host and YouTuber Lionel Lebron, who posed for a photo with President Trump.

Q has been dropping clues, which he or she or they call “breadcrumbs,” for followers of the conspiracy theory to puzzle together via the /pol/ (for “politically incorrect”) channel on 4chan, the anything-goes message board where many of the most insidious far-right memes and misdeeds are hatched. These breadcrumbs are now feeding what looks like a not-exactly-minuscule audience of Americans: One of the larger Facebook pages has more than 23,000 followers, while comedian Roseanne Barr, known for her affinity for fringe-right thinking, has tweeted about Q at least four times. A recent Q post alleged that the late Sen. John McCain actually took his own life to avoid a trial by military tribunal.

Q has laid out this narrative over several months, posting curious missives that often read like incomplete thoughts. “Priority to clean out the bad actors to unite people behind the America First agenda. Many in our govt worship Satan,” reads one of Q’s stanzas. While some portion of the expanding universe of Q followers, who call themselves QAnons, have been wading through the difficult-to-navigate world of 4chan as well as 8chan, an even shadier message board where Q has been spotted, the bulk of them seem to be learning about Q through the internet most of us occupy—in YouTube videos, in Facebook groups, and from Twitter accounts dedicated to republishing and analyzing new breadcrumbs from Q and highlighting details in the news that they claim support Q’s legitimacy.

What explains the migration and growth of the Q phenomenon? For one thing, perhaps the desire of this group of Trump supporters to live in a reality in which an unpopular president mired in self-made controversies is actually winning. And thanks to a report last month from NBC News, we now know that the theory can be traced back to two 4chan moderators and one YouTube personality who worked together to port the Q posts onto more mainstream channels. Their effort does not appear to be the only coordinated attempt to boost Q to a bigger audience. Q’s journey from the cobwebby corners of 4chan into the mainstream likely wasn’t entirely organic. Rather, it appears to have been amplified along the way by automated Twitter accounts—that is, bots. And they seem to have gotten their start very early in the life of the conspiracy theory.

Using Bot Sentinel and a second bot-detection tool, combined with data pulled and analyzed by Slate about how many tweets each account sent in a single day, I was able to determine whether many of the accounts tweeting about QAnon displayed behavior that’s indicative of automated activity. One of the surest signs of bot activity is volume, according to Philip Howard, an Oxford University professor and director of the Oxford Internet Institute, where he studies automation in social media. If an account tweets more than 50 times a day about politics or an election, that usually denotes some level of automated activity, says Howard. Both detection tools—each created by independent researchers—use machine learning to look for botlike activity, which includes tweeting extremely frequently, gaining a large following in a short amount of time, retweeting other bots, being followed by other bots, and sticking to highly polarizing political messages. (The creators of both tools report that they are able to identify bot accounts with about 94 percent accuracy.)
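Howard’s volume heuristic is simple enough to sketch in a few lines of code. This is an illustrative toy, not Slate’s or the researchers’ actual pipeline: the tweet records and account names here are hypothetical, and real detection tools layer machine learning on top of signals like this one.

```python
# Toy sketch of the volume heuristic described above: flag accounts that
# send more than 50 political tweets in a single day. The data layout and
# account names are hypothetical stand-ins, not any tool's real input.
from collections import Counter

DAILY_THRESHOLD = 50  # Howard's rule of thumb for likely automation


def flag_high_volume(tweets):
    """tweets: iterable of (account, date, is_political) tuples.

    Returns the set of accounts that exceeded the daily threshold
    for political tweets on any single day.
    """
    counts = Counter(
        (account, date) for account, date, political in tweets if political
    )
    return {acct for (acct, _day), n in counts.items() if n > DAILY_THRESHOLD}


# One account posts 60 political tweets in a day; another posts 5.
sample = [("@heavy_poster", "2017-11-15", True)] * 60 + \
         [("@casual_user", "2017-11-15", True)] * 5
print(flag_high_volume(sample))  # {'@heavy_poster'}
```

Volume alone produces false positives (prolific humans exist), which is why the detection tools combine it with the other signals listed above.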

Our analysis revealed a pattern that began very soon after Q started posting. The first breadcrumb on 4chan, which was posted on Oct. 28, alleged that an “HRC extradition” had begun the day before and that the effort was being coordinated with multiple countries. The author, who wasn’t yet going by Q, implored readers to keep a close watch on longtime Clinton aide Huma Abedin. On Nov. 4, a week after the first Q post surfaced on 4chan, Q appears to have posted a call to action urging followers to arrange the crumbs into a single graphic in order to “collect and post to spread the information appropriately.” The same day, Q’s posts started appearing on Pastebin, a site for storing and sharing snippets of plain text. Until then, most tweets with the #QAnon hashtag came from users sharing a YouTube video or from a few early followers posting questions here and there. But on Nov. 5, the number of tweets with the #QAnon hashtag jumped from around three a day to about 20, with at least three accounts that appear to be either partially or entirely automated accounting for about half of those tweets. (The analysis was done this summer, months after the tweets were first sent, so some tweets could have been deleted and some accounts suspended or deleted.)


By Nov. 6, there were at least seven more accounts that exhibited botlike or highly automated behavior tweeting about QAnon, according to our analysis. Some also used the #Pizzagate hashtag, and others included #TheStorm, an allusion to the countercoup that’s a main subject of Q’s messages. While it’s hard to know if this effort was coordinated, one or more individuals appear to have programmed numerous bots to help seed the conspiracy theory on Twitter.


Researchers who study the political activity of bots on Twitter say that automation has been in the arsenal of people trying to propagate conspiratorial news beyond their community of fringe adherents since before the 2016 election. “In the wake of the 2016 Pizzagate disinformation narrative, the use of coordinated amplification, automated or semi-automated, has been used more often than not to push these stories into the mainstream,” said Benjamin T. Decker, a research fellow at the Shorenstein Center at Harvard University. “With an understanding of platform algorithms, automation can be used to impersonate organic social engagement, in turn tricking users into believing a claim to be more popular than it actually is.” Twitter has been particularly susceptible to this kind of activity. Since the platform allows for account automation, it’s relatively easy to buy puppet Twitter accounts that someone hoping to seed a conspiracy theory can fire up to send a hashtag or news topic to the site’s trending topics. Whichever person or people decided it was time to amplify Q appears to have used this playbook. (Twitter announced new caps on heavy tweeting that will impact those who use Twitter’s developer tools to automate accounts, but those rate limits don’t go into effect until later this month.)

As the Q conspiracy continued to snowball and collect followers, the amount of automated activity on Twitter about Q increased as well. A Twitter search for the hashtag #QAnon on Nov. 15 revealed more than 400 tweets about the conspiracy with the hashtag on that day alone—a huge jump from earlier in the month, despite the fact that the bizarre conspiracy had not garnered the attention of the mainstream media at that point. Two of the likely bot accounts each tweeted with #QAnon more than 50 times on Nov. 15. At least 15 of the accounts tweeting with the hashtag that day displayed strong signs of automation, and none of those 15 were the same as the accounts I found that were tweeting at the start of the month.
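The kind of tally described above—daily counts of hashtag tweets, and how many came from accounts already flagged as botlike—can be sketched as follows. This is a minimal illustration under assumed data shapes; the account names, numbers, and record format are invented for the example and do not reproduce Slate’s dataset.

```python
# Minimal sketch of a daily hashtag tally: for each day, count total tweets
# carrying a hashtag and the subset sent by accounts already flagged as
# botlike. All data here is hypothetical, invented for illustration.
from collections import defaultdict


def daily_hashtag_counts(tweets, hashtag, flagged_accounts):
    """tweets: iterable of (account, date, hashtags) tuples.

    Returns {date: (total_tweets, tweets_from_flagged_accounts)}.
    """
    totals = defaultdict(int)
    from_flagged = defaultdict(int)
    for account, date, hashtags in tweets:
        if hashtag in hashtags:
            totals[date] += 1
            if account in flagged_accounts:
                from_flagged[date] += 1
    return {day: (totals[day], from_flagged[day]) for day in totals}


# Invented example: two flagged accounts dominate one day's volume.
sample = (
    [("@bot1", "2017-11-15", {"#QAnon"})] * 55
    + [("@bot2", "2017-11-15", {"#QAnon", "#TheStorm"})] * 52
    + [("@person", "2017-11-15", {"#QAnon"})] * 10
)
print(daily_hashtag_counts(sample, "#QAnon", {"@bot1", "@bot2"}))
# {'2017-11-15': (117, 107)}
```

A ratio like this—flagged accounts producing the overwhelming share of a day’s hashtag volume—is the pattern the analysis found on days like Nov. 15.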

Fast-forward to March 31, 2018—the day Barr tweeted about Q for the first time, when the conspiracy theory began to attract its first major wave of headlines. That day, thousands of tweets surfaced about the conspiracy theory, largely thanks to dozens of accounts that showed strong signs of automated activity. Journalists and genuine Q researchers were there too, but a look at the Twitter activity from that day shows an overwhelming deluge of accounts exhibiting signs of automation using the hashtag #QAnon. These days, #QAnon has become such a popular theme among bots and troll accounts that it regularly trends among the 600 bot and troll accounts monitored by the Alliance for Securing Democracy that are known to have ties to Kremlin disinformation campaigns. On the day after Trump’s rally in Pennsylvania at the start of August, the network of highly likely bot accounts that Bot Sentinel monitors sent 3,490 tweets with QAnon-related hashtags in a single day.

Automation on social media can make an issue or a person appear more popular than they actually are. During the third debate between Hillary Clinton and Donald Trump in 2016, bots sharing pro-Trump-related content outnumbered pro-Clinton bots by 7 to 1, according to research from Oxford University’s Project on Computational Propaganda. When a tweet gets hundreds of likes or retweets, it can suggest a groundswell of grassroots support, even if many of those interactions come from bots. It’s also worth keeping in mind that not all accounts that display signs of automation are completely automated; sometimes a person may take back control of the account or decide to fire it up during a particularly heated political moment. A good bot-maker’s automation on Twitter can be particularly difficult to spot, especially if their timing is nuanced and their language isn’t repetitive.

And bots, obviously, aren’t the only boosters of QAnon. Whoever is behind the Q posts, there are signs that there has always been an effort to take the theory off the chans and onto more mainstream platforms. Q didn’t get its name until Nov. 1, when the 4chan user posting the messages started signing them Q Clearance Patriot, in reference to “Q-level” security clearance. Yet on Oct. 28—the same day the first post from the user who would soon be Q went up on 4chan—a WordPress blog titled “QAnon News” went live. The blog includes the text of Q’s 4chan posts, links to tweets from President Trump, and detailed explanations attempting to break down the meaning of Q’s posts line by line with citations. Either the person who made this blog is somehow connected to Q, or they knew enough about what was to come that they decided to start a blog with the same name. Whoever made this blog was a fast researcher, too—unless they had the posts first.

What does all of this tell us? We don’t know if the flood of botlike Q tweets now has anything to do with the early attempts to amplify QAnon or what, if any, kind of coordination between bot accounts and any other efforts was happening then or now. But we do know that this kind of automated account—the kind that was a menace during the 2016 campaign, thanks to a Russian disinformation effort—often tweets about issues, such as gun control and racial justice, that divide Americans and stoke sociopolitical tensions. Apparently, belief in QAnon is now one of those partisan issues—and it looks like some parties may have recognized its potential to rile up and divide Americans very, very quickly.