Who Will Win the Reality Game?

A propaganda expert’s lessons from the 2016 presidential campaign.

Illustration by Slate

In April 2016, on the occasion of the New York state presidential primary, I found myself in the basement of an electronics store in Chelsea, doing research on how both Republicans and Democrats were leveraging digital media tools in their drive for the nomination. I’d gone there to attend a Meetup event called “Data Driven Marketing: Lessons Learned From 2016 Elections.” The company hosting the gathering billed itself as Ted Cruz’s digital marketing team. It was called Cambridge Analytica.

There weren’t more than a dozen people in attendance.

Representatives of Cambridge Analytica told the room that they had individual data sets on 230 million people in the United States. With a combination of consumer and lifestyle data, they said, they had created individual profiles “from scratch” and could use this information to persuade “key audience members to measurably change their behavior.” Demographic politics, they claimed, were dead. Big data combined with psychological manipulation was the path forward. That night in the basement, I was flabbergasted by the gall and hubris on display: here was a company publicly discussing its technophilic goal of manipulating public opinion on a grand scale.

Two years later, the rest of the world saw what I’d seen. The scandal that broke in March 2018, when it was revealed that the firm had harvested the Facebook data of tens of millions of users without their consent, tarnished Facebook’s reputation and led to the shuttering of Cambridge Analytica and its parent company, SCL. Cambridge Analytica crossed numerous ethical lines in gathering that data, facilitated (knowingly or unknowingly) by Facebook, and the story changed many people’s minds about the impact of social media.

One thing lost in all the ensuing drama was this: The things Cambridge Analytica did with all that information really weren’t that revolutionary. In the end, it was just a marketing company, pitching its wares to a few people in a basement.

My academic work over the past 10 years or so has been on the impact of what I call computational propaganda—the use of automation to influence political behavior. My research colleagues and I were particularly interested in social media bots—which, we concluded, could be used to mimic real people, and thus amplify certain conversations and candidates while suppressing others.

But the reality, we’ve learned, is that propagandists are pragmatists. They tend to use the cheapest and most widely available tools. Most of the political bot campaigns we studied in those years were powered by “dumb bots”: simple, repetitive, automated profiles, which liked or shared certain political content from particular people again and again. They smeared opposition groups with identical troll messages or false information on voting; they took over the hashtags their rivals were using to communicate and organize, and filled them with noise and spam.
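To make concrete just how unsophisticated these operations can be, here is a minimal sketch of the dumb-bot pattern in Python. Everything in it is hypothetical: the post and like functions are stand-ins for whatever platform API an operator might script against, and the hashtag and messages are invented for illustration. The point is that the whole “technique” amounts to scheduled repetition, not intelligence.

    # A minimal sketch of the "dumb bot" pattern described above: no machine
    # learning, just repetition on a schedule. The platform calls (post, like)
    # are hypothetical stubs standing in for a real platform API.
    import random
    import time

    TARGET_HASHTAG = "#ExampleElection"  # hashtag the bot floods with noise
    AMPLIFY_MESSAGES = [
        "Candidate X will fix everything! " + TARGET_HASHTAG,
        "Don't believe the polls. " + TARGET_HASHTAG,
    ]

    def post(text: str) -> None:
        """Stub for a platform's 'create post' call."""
        print(f"[posted] {text}")

    def like(post_id: int) -> None:
        """Stub for a platform's 'like' call."""
        print(f"[liked] post {post_id}")

    def run_dumb_bot(cycles: int = 3) -> None:
        """Repeat the same content over and over; that is the entire trick."""
        for _ in range(cycles):
            post(random.choice(AMPLIFY_MESSAGES))  # spam the target hashtag
            like(random.randint(1, 1000))          # inflate engagement counts
            time.sleep(1)                          # a real bot would wait minutes or hours

    if __name__ == "__main__":
        run_dumb_bot()

Multiply a loop like this across thousands of coordinated accounts and you get the hashtag takeovers and engagement spikes described above, without any of the machine-learning sophistication Cambridge Analytica advertised.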

The digital political strategists I spoke to laughed when I talked about the efficacy of Cambridge Analytica’s methods. They dismissed the company’s psychographic politics as a marketing gimmick, a vast exaggeration of what its technology was really capable of. Scholars have also noted a conspicuous lack of empirical evidence regarding the effectiveness of the company’s purported methods. Cambridge Analytica didn’t control the minds of 230 million Americans individually. Mostly, it was actually doing demographic political advertising, just online. The 2016 U.S. election wasn’t a case of “smart” systems or machine learning being leveraged to conquer democracy. It was a case of simple, mundane tools like dumb social bot armies spamming out content to confuse and anger people.

But it’s not going to stay that way.

During the 2018 U.S. midterms, things shifted, though only slightly in most experts’ opinions. My colleagues and I found more instances of minority groups and issue voters being targeted with politically motivated trolling and disinformation online. We concluded that this targeting was of ambiguous value to campaigns, since it’s not clear how many votes it might have flipped, but that the people on the receiving end were experiencing very real emotional and psychological harm.

Deepfakes, or A.I.-doctored videos, were also on most people’s radar in 2018—but not a lot came of them. In December of that year, Tim Hwang, the noted A.I. scholar, wrote in Undark that “[deepfake] technology is still far from being able to generate compelling, believable narratives and credible contextual information on its own.”

A lot seems different in 2020. While technology firms and civil society groups have ramped up responses to the threats of computational propaganda, we are far from having conquered, or even corralled, the problems at hand. There are already indications that A.I. is beginning to enter political communication more effectively. In my new book, The Reality Game, I explore a variety of these innovations: cases of deepfakes being used for attempted political manipulation, individualized online ad targeting à la Cambridge Analytica, the deployment of VR for the purposes of multisensory indoctrination, and A.I. voice systems constructed to sound just like real people.

For the most part, though, these tools are far from market-ready. The majority of technology used to spread political information online in 2020 will continue to be simple. Armies of dumb bots still exist and are still being used to barrage people and algorithms alike. Facebook recently announced that it will ban deepfakes, but it doesn’t plan to do anything about altered videos made using more readily available technology (sometimes known as cheapfakes). As smarter technology becomes cheaper, it is being used to supplement these more basic tools, but there is certainly still time for society to get in front of, and to regulate, A.I. as a mechanism for underhanded political communication.

There are steps we can take now. Tech firms must begin to design their tools with human rights in mind. It may be too late for Facebook to fully recover its reputation, but the next iterations of social media, from chat apps to extended reality tools (some already owned by Facebook), can be built to promote both privacy and free speech. And Congress can pass laws to enforce transparency in political advertising, as well as protect people’s data online.

There has been little progress toward these common-sense changes, but it doesn’t have to stay that way. Silicon Valley companies have consistently fought regulation, but their arguments against oversight ring hollow when their own ethical policies are so lax. If users start logging off, the companies will have to start to change. We all have a role to play in creating the information environment we want.

We were blindsided in 2016. But now we know better. If we don’t take action to control these technologies, then they will control us.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.