By some estimates, more than two-thirds of Elon Musk’s Twitter followers are bots or spam accounts. According to Botometer, a popular site for checking bot followers, the number is closer to 20 percent. Regardless of what the exact numbers are, it is undeniable that there are significant numbers of bots on Twitter.
In the past few weeks, Musk has tweeted about Twitter’s bot problem, going so far as to cite the issue as a reason for potentially pulling out of his deal to buy the platform. Whether or not you believe him—that bots are what’s threatening the deal—Samuel Woolley says that the conversation is focused on the wrong thing. Woolley, co-author of the new book titled simply Bots, argues that bots are not inherently bad—they’re just tools used to automate accounts.
On Friday’s episode of What Next: TBD, I spoke with Woolley, assistant professor in the journalism school at University of Texas-Austin, about how “bad” Twitter’s bot problem really is. Our conversation has been edited and condensed for clarity.
Lizzie O’Leary: What exactly is a bot?
Samuel Woolley: Our definition of a bot is any automated software program that is used to do tasks online that a person would otherwise have to do. There are lots of tasks that are so monotonous, and also so gargantuan, that people wouldn’t even be able to do them. For example, crawling the web to find new websites, scraping through all the different data that exists on the web to try to make sure that the ranking system on Google works properly. So, automated software programs have to do those tasks.
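Woolley’s definition, an automated program doing a task a person would otherwise have to do, can be made concrete with a toy example. This is purely illustrative (it is not how any real crawler works): a few lines that pull every link out of a page, the kind of monotonous job a crawler bot repeats billions of times.

```python
import re

def find_links(html: str) -> list[str]:
    """Collect every href from a page -- a monotonous task a bot automates."""
    return re.findall(r'href="([^"]+)"', html)

page = '<a href="https://example.com/a">A</a> <a href="https://example.com/b">B</a>'
print(find_links(page))  # → ['https://example.com/a', 'https://example.com/b']
```

A person could do this for one page; a bot can do it for the entire web, which is the point Woolley is making.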
There’s also chatbots. Anytime you are interacting with a health insurance company, or your cellphone provider, that little chatbot window pops up because it’s easier for them to do that than have you talk to a human, right?
Yeah. There’s been a lot of reporting out there that talks about bots as if we’re in this brave new world of Terminator AI, like these smart bots that are out there manipulating public opinion and getting us to vote for particular politicians by chatting with us. And that’s simply not the case. I wrote a book a while back called The Reality Game. The research I did for it, based on a literature review, shows over and over again that the most intelligent AI system that exists as a chatbot basically has the intelligence of a 5-year-old. And 5-year-olds can be manipulative, but they’re not going to get you to change your political affiliation, usually, unless they say something particularly incisive by accident.
But what can get people to change their minds is a flood of information. Like seeing the same message over and over again on social media, creating an illusion of popularity.
So, if you get 20,000 bots talking on a particular topic, or using a particular hashtag, it makes that thing look more popular than it actually is. And the quantitative metrics that get used to generate those Twitter trends or YouTube trends then say, “Oh, look. This thing’s popular.” A very sad example of that comes from the day of the Parkland shooting, when the hashtag about David Hogg being a character actor was the number one trending hashtag on YouTube. Later on researchers showed that was hugely driven by bots.
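The mechanism Woolley describes, raw volume gaming a quantitative metric, can be sketched in a few lines. The numbers below are invented for illustration, and real trending algorithms are far more sophisticated than a raw mention count, but the sketch shows why a naive popularity metric is vulnerable to a bot flood.

```python
from collections import Counter

# Hypothetical numbers for illustration: organic posts vs. a bot network.
posts = ["#organic_topic"] * 12_000          # a genuinely popular hashtag
posts += ["#astroturfed_topic"] * 3_000      # modest real interest...
posts += ["#astroturfed_topic"] * 20_000     # ...amplified by 20,000 bot posts

# A naive "trending" algorithm that just counts raw mentions
trending = Counter(posts).most_common(1)
print(trending)  # → [('#astroturfed_topic', 23000)]
```

The bot-amplified hashtag tops the chart even though far fewer real people care about it, which is the illusion of popularity Woolley is describing.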
While bots get a bad name on social media because they can spread disinformation, there are also bots built to help consumers or engage in advocacy. Basically, they’re created to make things easier for people. What do some of these good bots do?
There are bots, for instance, that have been built to help people fight their parking tickets in an automated fashion. Bots have been built that are used to amplify voices that are otherwise marginalized, bots that promote awareness about voting, or bots that are used to promote awareness about Black Lives Matter. I know a lot of people in journalism who have built bots that’ll parse larger data sets for them.
Right now, Elon Musk is claiming that Twitter hasn’t been open with him about how many bots and fake accounts exist on the platform. After he complained about it in a securities filing, the company essentially said, “Fine, you can have access to our full fire hose of data and figure it out yourself.” But perhaps ironically, you’ve noted that the reason that Twitter has an issue with bot and spam accounts is the transparent way it was built. Can you explain that?
Twitter has historically had an open API, which has meant that, yes, developers could build bots onto Twitter in quite an easy fashion. And so that was one way a lot of bots got created there. But it also meant that researchers could study Twitter easily. Allowing researchers, journalists, and developers onto the platform to analyze data means that there’s tons of information out there on how many bots there are on Twitter. Facebook does not do that. YouTube does not do that. They do not allow that kind of access.
There is another question here, that’s important when you’re trying to figure out how valuable a company like Twitter is: Platforms like Twitter make money based on their number of active users. So, do bots distort those numbers?
It’s an argument I’ve been making for a really long time in my own research: the number of bots on your platform can inflate audience engagement, and that includes advertising engagement. So, basically it undercuts the bottom line, because advertisers hate fake traffic. They want the real eyeballs of actual people who are going to buy their stuff, ideally.
My perception after following this coverage for the last several weeks, and listening to Elon Musk, but also knowing what I know about Twitter, leads me to believe that Elon is using this issue of bots as a backdoor to get out of the deal. Because Twitter is arguably the industry leader in combating influence operations, disinformation, and bot operations. I think that Elon Musk is seizing upon a major critique of Twitter that would potentially undercut Twitter’s engagement and its advertising mechanism. However, among the other social media companies, Twitter’s doing pretty well on this. And we know that Twitter has done big bot purges.
I want to unpack a little bit the relationship between bots and money making. On the one hand, the business model is centered around ad engagement, and it seems like more bots would equal more engagement, but on the other hand, ad engagement comes from engagement from people who are interested in the advertisement. So how do you pull those things apart? Do bots make money?
Bots can be used to make money because the internet is a computational system—it runs on quantitative metrics, and it runs on numbers. And social media is no different in many ways. So, take a site like YouTube, the more popular you get as an influencer on YouTube and the more engagement you get on YouTube, the more money you make every month. And so, if you can figure out a way to generate automated engagement, that doesn’t get detected by the platform, you can become quite rich.
Do bots make money for Twitter?
I think it’s inarguable that bots have generated revenue for Twitter over the course of the company’s history, because Twitter has definitely had a huge bot problem. And so, the numbers of bots on the platform… Researchers have made claims that the numbers of bots on the platform are far higher than what Twitter reports to the Securities and Exchange Commission every year.
Which is about 5 percent, they say.
Yeah. And some researchers have said as many as half of the accounts on Twitter are bots. But I think that’s way overblown. More reasoned researchers would say that maybe 15 or 20 percent of the traffic on Twitter is bots. And we know that the majority of bots on Twitter are spam bots and are commercially motivated.
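Estimates like “15 or 20 percent” come from classifiers that score accounts on behavioral signals. The sketch below is only a rough illustration of that idea: the signals and thresholds here are invented for this example, and are not Botometer’s actual features or anyone’s real research values.

```python
def bot_score(tweets_per_day: float, account_age_days: int,
              followers: int, following: int) -> float:
    """Toy heuristic: each suspicious signal adds 1 to a 0-3 score.
    Thresholds are invented for illustration, not real research values."""
    score = 0.0
    if tweets_per_day > 100:       # inhumanly high posting rate
        score += 1
    if account_age_days < 30:      # brand-new account
        score += 1
    if following > 0 and followers / following < 0.01:
        score += 1                 # follows thousands, followed by almost no one
    return score

# A hypothetical spammy account trips all three signals
print(bot_score(tweets_per_day=400, account_age_days=5,
                followers=3, following=2000))  # → 3.0
```

Real classifiers combine hundreds of such features statistically, which is partly why different research teams arrive at such different estimates.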
So, even if Musk is using this as a smoke screen to get out of the deal, he’s making maybe a decent point?
He’s making a decent point, it’s just that he’s coming in at a time when a lot of progress has been made and throwing out the baby with the bathwater. It’s simultaneously gratifying and really frustrating to see him saying this, because it glosses over a lot of complexity. It also glosses over a ton of research and work that’s been done in infosec, but also by a lot of other social scientists and computer scientists to identify and help combat this issue at the social media firms.
Elon Musk talks in sound bites. He talks in a way that suggests he hasn’t done his background research. He doesn’t understand audience engagement. He doesn’t understand the issues of data security on Twitter. And the sentiment amongst the infosec crowd is that a lot of people would be very scared if Elon bought Twitter, because it could potentially result in a lot of regression in policies that have been made, not just on bots, but on a lot of other things.
On Tuesday, Texas Attorney General Ken Paxton waded into this whole mess saying that the state was investigating the bot issue because it might hurt Texas businesses and consumers. What do you make of that?
It seems to be a blatantly political move. Ken Paxton has talked a lot about his concerns about conservatives being silenced on social media, and he has tried to hitch Texas to Elon Musk’s purchase of Twitter in a way that riffs off of Musk’s statements about Donald Trump. But the research does not back up what Ken Paxton is saying. Scientific work that’s been done does not show that conservatives are being silenced on Twitter or Facebook. In fact, there’s quite a bit of good scientific research out there that shows that some conservative voices are unduly amplified on these sites.
Does this actually mean anything for the Twitter deal?
I think that Paxton’s move is definitely glomming on to the headline grabbing that Elon Musk has done. What Elon has done, it seems like, is used the purchase of Twitter to talk about a lot of political issues that have gotten him a lot of attention. And Ken Paxton is no fool. He understands that by proposing these laws and by talking about these things that he’s appealing to his base. He’s appealing to the anger that has really been directed specifically at Twitter after the deletion of Donald Trump’s Twitter account.
Our own social media timelines, particularly on Twitter, are so particular to us, and I think sometimes we forget that other people’s look totally different. And yet we all think that there is some singular shared reality called “Twitter.” I wonder how possible it is to make policy, whether it’s about bots or spam accounts, when our realities, our followers, and the makeup of who those followers are, are so different.
There are a lot of different subsections of Twitter that exist not only within the United States but internationally. People use Twitter in many languages other than English, and their experience of Twitter looks way different than the experience of Elon Musk or Ken Paxton or you, or me. And that means that the laws and policies that get made to police certain modes of communication on Twitter need to be socially and culturally contextual. They need to think about how different types of people in different places use Twitter.
There are things that can be done that are universally beneficial to users, however. Deleting blatant lies about how, when, or where to vote, for example. But that doesn’t mean that there’s not a huge need for nuance, not only in external policy, but also in Twitter’s internal policy. And frankly, we haven’t seen the nuance that I would hope for.
Does all the heat on this issue mean that Twitter needs to put in a little more effort?
If we look at Twitter’s size, and the money it has when set alongside Facebook or YouTube, Twitter has done way more. The better questions would be: Could Twitter benefit from more money being put toward this, and where could they get it? Because the answer to the first is “yes,” and the answer to the second is a big question mark. If Elon Musk had been saying, “I want to buy Twitter, and I’m going to put hundreds of millions of dollars toward the content moderation and bot moderation and spam moderation we’re talking about,” people like me would be really excited, because Twitter needs that.