We know for sure that Cesar Sayoc, who allegedly targeted high-profile Democrats with mail bombs in late October, isn’t a Russian bot. His mugshot proves that he’s a flesh-and-blood Florida human rather than a computer program. But if you had read the postings from his now-suspended Twitter account, it wasn’t so easy to tell. You might have concluded that Sayoc was, in fact, a robot. And that’s a problem.
We’ve been under assault by fake internet personae since well before Trump threw his hat into the presidential ring. However, it used to be easy to tell the difference between an ordinary person on the internet and a machine-operated facsimile. Robots simply weren’t very good at imitating human speech and communication patterns. With the chatbots of yesteryear like Eliza, you had to be exceptionally stupid or exceptionally horny to think a virtual human was real. Now, it’s not so easy—not just because bots are getting better at imitating humans, but because humans are increasingly acting like bots.
Even in the Stone Age of virtual personae, there was no surefire way to tell when an internet person was actually a robot—but there were often dead giveaways. One of the most reliable is that humans have to sleep and bots don’t. If you find an account that’s tweeting 24/7, you can be certain that it’s a robot, or at least a cyborg: a human who lets a bot post while they’re unconscious.
Luckily, Twitter makes it relatively simple to download tweets to catch people sleeping or to do other analysis. All you need to do is write a computer program that hooks up to Twitter’s data spigot—technically, the Twitter API—and you can play around to your heart’s content. Over the past few months, I’ve been doing a little snooping as time allowed and found some interesting behavior.
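As a rough sketch of what that first step looks like: the v1.1 Twitter API returns each tweet as a JSON record with a `created_at` timestamp string, and turning those strings into something you can analyze is a one-liner. (The sample records below are invented; a real program would get them from the API.)

```python
from datetime import datetime

# Twitter's v1.1 API encodes tweet timestamps as strings in this format.
CREATED_AT_FORMAT = "%a %b %d %H:%M:%S %z %Y"

def tweet_times(tweets):
    """Parse the 'created_at' field of each tweet record into a datetime."""
    return [datetime.strptime(t["created_at"], CREATED_AT_FORMAT) for t in tweets]

# Hypothetical sample records, shaped like the API's output.
sample = [
    {"created_at": "Wed Oct 24 14:30:00 +0000 2018"},
    {"created_at": "Wed Oct 24 14:30:05 +0000 2018"},
]
times = tweet_times(sample)
print(times[1] - times[0])  # the gap between the two tweets
```

Once the timestamps are parsed, everything that follows—sleep patterns, pause lengths, daily rates—is just arithmetic on that list.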
In the plots below, the time of a person’s tweets is plotted on a 24-hour clock (midnight at top, noon at bottom); the further a point sits from the center, the more often that account tweets at that particular time. Take, for example, @hankarnold54, a “Nationalist Veteran Patriot” who wants to take back America. Plot the account’s tweets this way and you see that it never seems to sleep, and its predilection for posting on the half-hour makes a sunburst pattern. I’m very comfortable saying this is a bot. (I tweeted to ask but have received no response, even as the account continued to post right-wing news.)
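Under the hood, a clock plot like this is just a binning exercise: count tweets per hour of the day, and separately check how tightly the tweets cluster on the hour and half-hour. A minimal sketch (the timestamps here are invented, and a real version would feed the counts into a polar plot):

```python
from collections import Counter
from datetime import datetime, timedelta

def hourly_activity(times):
    """Count tweets falling in each of the 24 hours of the day."""
    counts = Counter(t.hour for t in times)
    return [counts.get(h, 0) for h in range(24)]

def on_the_half_hour_fraction(times):
    """Fraction of tweets posted within a minute of :00 or :30.
    A sunburst pattern on the clock plot shows up as a high value here."""
    near = sum(1 for t in times if min(t.minute % 30, 30 - t.minute % 30) <= 1)
    return near / len(times)

# A bot-like schedule: one tweet every half hour, around the clock.
start = datetime(2018, 10, 24)
botlike = [start + timedelta(minutes=30 * i) for i in range(48)]
print(hourly_activity(botlike))          # two tweets in every hour, no sleep gap
print(on_the_half_hour_fraction(botlike))  # 1.0
```

A human timeline, by contrast, would show a contiguous run of near-zero hours (sleep) and a half-hour fraction close to the 4-in-30 you’d expect by chance.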
Compare that to a typical humanoid—such as me. Below, you can see a seven- to eight-hour period when I stop my online activity, and you can tell that my sleep pattern is pretty normal.
Sayoc’s sleep pattern was apparently not ordinary; the carve-out in his daily clock is quite short and in the wrong place. It looks like Sayoc wasn’t getting much sleep, and when he did, it was in the middle of the day.
On the other hand, a bot can pretend to sleep; a lot of bots, in fact, have a diurnal pattern, and in many cases it looks more natural than Sayoc’s.
So even sleep patterns are not a foolproof method of distinguishing human from robot; sometimes the robots seem more human than the actual humans. This is true in other ways as well.
For example, most people don’t spend their time sending out a rapid-fire series of tweets, pausing only a second or two between messages. Here’s a graph of the times I tend to pause between tweets; the x-axis is the pause length in seconds, and the y-axis is how often a pause of that size occurs.
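A pause graph like this comes straight from the gaps between consecutive tweets in a timeline. A sketch, again with invented timestamps:

```python
from datetime import datetime, timedelta

def pauses(times):
    """Seconds between consecutive tweets, oldest first."""
    ordered = sorted(times)
    return [(b - a).total_seconds() for a, b in zip(ordered, ordered[1:])]

def rapid_fire_fraction(times, threshold=8):
    """Fraction of gaps shorter than `threshold` seconds — a crude bot
    signal, since humans rarely tweet only seconds apart for long."""
    gaps = pauses(times)
    return sum(1 for g in gaps if g < threshold) / len(gaps)

# A burst of four tweets two seconds apart, then a three-hour silence.
start = datetime(2018, 10, 24, 9, 0)
burst = [start + timedelta(seconds=2 * i) for i in range(4)]
burst.append(start + timedelta(hours=3))
print(pauses(burst))               # [2.0, 2.0, 2.0, 10794.0]
print(rapid_fire_fraction(burst))  # 0.75
```

The histogram in the article is just this list of gaps, bucketed by size; the `threshold` of eight seconds is my own illustrative choice, not a standard cutoff.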
Now look at this Twitter user, which I would be very confident calling a bot even if it didn’t say so in its profile:
Notice how common it is for bots to have really short gaps between tweets—less than eight seconds, four, or even two.
Humans may not be tweeting quite that close together, but some of us are catching up. Cesar Sayoc also did a lot of rapid-fire bursts:
He’s far from atypical; indeed, his posting behavior is more human by this measure than many other people out there. In the past few weeks, Sayoc was only posting at a rate of 10 to 15 tweets per day. It’s not uncommon to see real humans tweeting at a blistering pace of 50, 100, or even more tweets per day—something that would be unremarkable in a bot, but seems rather incredible for somebody who has to hold down a job. Anyone who can keep up this kind of pace, day in, day out, is arguably behaving more like a machine than like a flesh-and-blood organism.
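The per-day rates above are easy to estimate from a timeline once you have the timestamps. A sketch, with made-up data:

```python
from datetime import datetime, timedelta

def tweets_per_day(times):
    """Average daily tweet rate over the span of the timeline."""
    span_days = (max(times) - min(times)).total_seconds() / 86400
    return len(times) / max(span_days, 1.0)  # guard against near-zero spans

# 100 tweets spread evenly over roughly 10 days.
start = datetime(2018, 10, 14)
timeline = [start + timedelta(hours=2.4 * i) for i in range(100)]
print(round(tweets_per_day(timeline)))  # about 10 per day
```

By this measure, Sayoc’s 10 to 15 tweets per day sits at the low end, while the 50-to-100-a-day humans look, numerically, a lot like machines.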
Humans imitating the behavior of bots is even clearer, and more alarming, when it comes to the content and nature of the communications. Misinformation and propaganda are spread by humans just as they’re spread by robots. There’s evidence that bots have been disproportionately responsible for spreading fake news. But humans are also acting as very efficient misinformation-spreading machines. Over the summer, I monitored a few fake news websites and noticed that the stories that really took off in the Twittersphere typically did so because a real person—often a verified Twitter account—started the game of telephone and did most of the amplification, with bots helping out in the background. For example, one large thread promoting the ludicrous claim that “joining Antifa is now illegal—punishable by up to 15 years in prison” could be traced back to internet personality (and alleged failed Robert Mueller smear plotter) Jacob Wohl, who in turn got it from a well-known fake news site.
And delving deeper into the content of people’s social media postings shows an even more fundamental way in which human behavior and bot behavior are converging. Twitter and Facebook and other social media sites are designed to make it easy to communicate with others with a minimum of effort—press a button to like or retweet, and you’re done. Even when people choose to express themselves online more fully, they often do so with a string of emoticons and hashtags and links and references—a simple sequence of symbols that doesn’t necessarily require the higher-level grammar and syntax of ordinary human speech and writing. And it is precisely that grammar and syntax that chatbots of old had a hard time with. Dispense with them, and we’re speaking a much more machine-friendly language.
In other words, it’s not that the bots are getting so much more sophisticated at imitating human communications. It’s that, thanks to social media, human communications are changing in a way that makes us easier to mimic. Computers have finally started passing the Turing test for reasons that Alan Turing himself would never have imagined: We’re becoming more botlike ourselves.