Two new reports prepared for the Senate Intelligence Committee offer the fullest picture yet of how the Russian disinformation campaign to influence the 2016 U.S. election actually worked. Delving into a trove of social media posts and ads attributed to a Russian troll farm, researchers uncovered activities both surreal (there were ads selling sex toys, for some reason) and maddening, including efforts to suppress voting among black Americans.
The group behind one of the reports was the firm New Knowledge, whose director of research, Renée DiResta, joined Slate’s tech podcast If Then this week to discuss the findings. DiResta is also the head of policy at the nonprofit Data for Democracy as well as a Mozilla fellow in media misinformation and trust. She regularly writes and speaks about the roles that tech platforms and curatorial algorithms play in the proliferation of disinformation and conspiracy theories. We discussed how her group of researchers undertook their campaign postmortem; why Americans and the social networks they love were so easy to exploit; and what Congress, the tech companies, and U.S. voters have to do next. The interview has been edited and condensed for clarity.
Read or listen to our conversation below, or get the show via Apple Podcasts, Overcast, Spotify, Stitcher, or Google Play.
April Glaser: Some of the key findings of your report are that social media companies underreported or weren’t quite forthright with the information that they’ve slowly leaked in dribs and drabs to researchers since 2016, and that some of the platforms owned by the large companies, namely Instagram, owned by Facebook, and YouTube, owned by Google, played a much larger role than had really been made out in congressional hearings, in investigations, and in the media. It seems like these two things may be connected in some way, in that investigators, researchers, Congress, and the press weren’t getting as much information from these companies as they should have—and that we also weren’t really looking into the role of some of their subsidiaries.
Renée DiResta: I can probably put some of that in context, if you’ll let me go back a little bit. Following the 2016 election, there were the beginnings of an understanding that perhaps the Russian operation on Twitter, which was sort of known about, had actually extended to other social platforms as well. And so there were outside researchers—myself, a collection of other people; Jonathan Albright was one of the leading figures in this. He was one of the members of my team. We were kind of scouring the internet looking for evidence of this. At the same time, there were some amazing journalists who were doing the same thing. One of them, I think maybe the Daily Beast, wrote an article about how what appeared to be Russian trolls were running the largest Texas secessionist page on Facebook. I think shortly thereafter there was another investigation into Black Matters US.
And then, funnily enough, in the Russian press as well, there started to be leaks from people who had actually worked for the troll factory, saying, Yeah, we totally took on the American election, and here’s how we did it. Those of us who were trying to find these breadcrumbs were trying to get a sense of the scope of the operation—and I was also at the time beginning to communicate with Sen. Warner and some of the other senators about what was becoming increasingly obvious: a structural problem with social networks.
I met up with Tristan Harris, who of course talks about the impact on people, individuals, and their experience of social networks; and Guillaume Chaslot, who was one of the people who created the YouTube recommendation engine; Sandy Parakilas, who’d been at Facebook and kind of came out as a whistleblower, saying they didn’t necessarily always do the best job with their data. So there were these two parallel threads. One was disinformation research. The other was sort of social network accountability for what increasingly seemed like misinformation, polarization, radicalization. Those threads, we were really kind of pulling on both of them in 2017, and as the evidence of Russian interference became more and more obvious, we began to say, Hey, maybe we should have some hearings on this.
Glaser: One thing that really stuck out in your report was that we’re not just talking about memes and activist groups and DMs here, we’re talking about very, very detailed operations to really come off like faux activist or advocacy groups. I mean, you talked about in the report how there were e-commerce sites selling things like sex toys, the recruitment of Americans to work with Russian agents through job listings, free self-defense classes, the solicitation of photos of women for a calendar, offering counseling on a page called Army of Jesus to followers struggling with porn addiction. I’m curious why they put in so much work to appear authentic here. Was this level of posturing necessary for the Russian operation? Because it’s just fascinating how detailed it was.
DiResta: I think it’s actually the detail that makes it popular. It’s maybe a chicken-and-egg problem. We don’t have a ton of insight into how many people converted to follow the page and when, but if you think about your experience on social media, you engage with what is effectively like a media brand. A lot of these pages were really masquerading as media brands. They had a website, they had merch, some of them even had podcasts. They had all of the things that you would think of when you think, oh, I like this media property, it’s independent media. They were very adamant about that. “We didn’t trust the media, so we became it” was the tagline for Black Matters, even though it’s kind of grammatically wonky. They had a huge banner announcing the launch. They had banners announcing “We hit 100,000 subscribers on our site.”
It was really growing an ecosystem, and in order to do that, authenticity really wins the day. People follow brands because they feel an emotional resonance, a connection. The level of detail, the way you communicate, that stuff does matter, and so if you think of it as a social media marketing agency building up small brands, that’s effectively what was happening here.
Will Oremus: Some of the revelations in these reports have sparked a backlash from the NAACP, which is actually calling for a boycott of Facebook, on the grounds that a lot of this Russian disinformation had targeted the black community in particular. Can you talk about why that might have been and what effect it may have had?
DiResta: It’s important to understand that the topics that they picked were based on real pre-existing social divisions. They didn’t create rifts, they exploited them. There is no way to deny the deep struggles and challenges that America has had with race for decades, including, as this operation was taking place from 2014 or so ’til now, things like the Black Lives Matter movement trying to achieve change, the social debates that that has led to, the uncomfortable feelings that that has brought up. So as this happens, they are able to latch onto that and to increase the feelings of alienation and grievances among people who have legitimate feelings of alienation and grievances, but to kind of double down on it.
If you look at the far-right content, there’s pre-existing rage, and they really work to amplify that rage. With the black community, the themes were alienation—really deeply leaning into the alienation, this country isn’t for us. They take things that are already there, and they work with what they have, because there’s a saying that the best propaganda is 90 percent true, and that’s because if it’s easily discredited, people dismiss it, so it has to feel real, it has to feel resonant.
I think the reason that they went so hard for the black community with the suppression narratives is that, candidly, the black community is a powerhouse when it comes to voter turnout. They turn out, they vote, they have historically voted very strongly in alignment with the Democratic candidate. So it’s almost a recognition of the power of the community and a recognition of the deep underlying rifts in our society, that that is the reason why they would lean so hard into targeting the black community.
Oremus: One of the questions that has come up in the wake of your report is, well, did this swing the election? That’s a question that people keep coming back to, but I think you’re kind of tired of that question, right?
DiResta: You know, I don’t have an answer to that question. Nothing in the data set that I was provided would give me the answer to that question. I think there are a couple of things here. First, is the only thing worth investigating whether or not Russia flipped an election? I would say the answer is no. I mean, I was talking to somebody in law enforcement, and the way he put it was, “Attempted murder is still a crime.” You still go and investigate it, even if they didn’t get it done. There was an assault on American democracy. There was a foreign adversary who spent three years manipulating and targeting American individuals, pretending to be American, interfering in social conversations, political conversations.
That is something that we need to understand. We need to understand how they did it, if only to detect it faster in the future, because there’s no indication that they’re going to go away. As far as they’re concerned, it was successful. They reupped their budget. So I wouldn’t say frustration, because I understand that everybody would like an answer to that question.
When we think about impact, I think it’s also important to look at, did this change attitudes within the community? Did it shift the Overton window? Was the propaganda effective? It continues to circulate. It continues to be propagated in the communities that it targeted, and I think that’s in part naïveté. Who goes and looks at a meme and thinks, “Oh, this is Russian propaganda”? I do all the time now, but most people don’t. And so this content is still out there, because there is a kernel of truth in a lot of it, and that’s what makes it such a challenging conversation.
Glaser: One question I have is about calling this information warfare, which is a phrase I see batted around a lot in these conversations. One hesitation I have with it is that by calling it war, putting it in that militaristic context, it could open the door to increased surveillance of social media platforms that could affect communities that are already oversurveilled—particularly black American communities, say, whose communications police are already closely watching. So what are your thoughts on this framing of information warfare? Is it useful? Are there pitfalls?
DiResta: You know, I wrote that essay on the digital Maginot Line, and I used the metaphor of war in there. It took literally six months for me to feel confident releasing that piece, in part because I really worried about the terminology and the war metaphor. What I would say to that is that they think of it as a war, and that is where, if you read the indictments or if you go and you read Project Lakhta, the way that they describe what they’re doing, this is not a, Oh, we’re just going to mess around with some Americans. They have real strategic objectives. This is a toolkit that they have, and this is the framing that they use.
I got to thinking as I read more of this, or even if you look at domestic trolling groups, they’ll use the phrase meme war, right? The great meme wars of 2016. So there’s this sense among people who use these tactics and believe in the power of the outcomes that they can affect, and they are using terms like war. Then the rest of us are kind of over here talking about well, it’s some shitposters on the internet. There’s kind of a real divide there in how we’re thinking about it. We’re treating it as like, oh, this is just a problem of governance. We just have to do a better job detecting this stuff earlier, as opposed to thinking about ways to deter it, which is the framework that you would use if you were thinking about it more in militaristic terms.
So I absolutely understand the reservations, and I feel them acutely myself. At the same time, I don’t think that we’re well served by pretending that these are just sort of disparate attacks that happen to look the same way, when they really do have, in many countries, the goal of regime change.
Oremus: I also had a few reservations about the use of “war,” although I can understand it. One thing that happens in wartime is that you might suspend normal laws or normal civil liberties or that sort of thing, and I would worry if that’s one of the implications of it. But I see your broader point, that this is a long-term thing, many states are involved in it. It’s just going to keep growing. It’s not a one-off, and it’s something that we have to be prepared for.
I wanted to talk a little bit about your other experience, studying misinformation and virality and network effects on social media, and what it is about the social networks that made them so vulnerable to this. I’m curious whether you think: Were they the victims of this, or were they culpable? And what about the social networks enabled this Russian campaign to be so effective?
DiResta: I think that there’s a structure. Our information ecosystem evolved in a certain way, and you can trace back how the platforms kind of grew and acquired other companies. Really, we amassed an information ecosystem that’s largely controlled by five big entities. I think the interesting challenge of that is that it does kind of create these ready-made audiences for propagandists. Simultaneously, they know quite a lot about the users on their platforms, because they’re serving them ads, and so they are gathering data about those of us who use the platforms, constantly, for the purpose of selling ads, but then also for the purpose of making recommendations.
They have to keep you onsite in order to continue to serve you content, and as part of that, this is where curatorial algorithms come into play. We can talk about the tactics of the IRA all along, but what is the information environment that leads to those tactics? Why do those tactics work? And I think that we have this idea of mass consolidation of audiences, precision-targeting, and then gameable algorithms. When you have these curatorial algorithms, particularly early in 2015, 2016, you might remember what a disaster Twitter trending was.
Prior to Twitter really taking into account things like quality of accounts, any botnet could make anything trend, and regularly did. This is where the architecture of the information ecosystem just lends itself to influence operations, in part because they are producing content with the goal of virality. They’re producing highly emotionally resonant content. They’re oftentimes really working hard to kind of own their keywords and make it so that when you search for a term, they are what you find. This is how the environment has evolved.
When we talk about victim, the idea of the tech company as victim, I would say they did not expect this, and this is not necessarily the kind of alignment that one would expect, right? Who’s thinking about, “How is Russian intelligence going to game my platform?” But where I do kind of assign some culpability is that starting in around 2015, we were talking about ISIS, and we were looking at other malign entities, terrorist organizations that had begun to kind of co-opt the platform. If you remember the conversation around that time, people were really like, Uh-oh, what if we kick ISIS off Twitter? Who’s next? It was framed as this slippery slope. It was binary. It was either we let them stay on here, or we’ve just moved into an environment of mass censorship.
We really erred on the side of not moderating, and the government and the tech platforms really weren’t collaboratively working together in any way. This was relatively soon after the Snowden revelations. They didn’t want to be seen as cooperating with the government. So there was this moment, this almost like inflection point, where we could have taken the time to look more deeply at this stuff, but it was just seen as, well, this is just one random terrorist organization and not a big deal. If you look back at that now, you’ll realize that while that was happening, the Russian operation was already underway.
And if you look at DARPA, the Defense Advanced Research Projects Agency, whose job is to prevent strategic surprise—in this case, they were running studies starting in 2012 looking at whether propaganda on social platforms was going to be a problem. So there were indications that maybe we should have been thinking about this and weren’t. I think it’s really hard, and I’m not really interested in pointing the finger of blame backward. I think where the finger of blame is warranted is actually in how they comported themselves through 2017, when there were indications, when this became abundantly clear, and we still had that kind of hedging from Facebook, as opposed to being immediately transparent about it. I think they’ve come a long way since then, but that was a particularly tense time. How do you make this behemoth company accept accountability for what happened?
Oremus: So what is your recommendation at this point? I mean, as you said, there’s been a lot of movement in terms of what social media platforms are willing to do these days in moderating, in working with the government, in taking the advice of researchers or experts. But is there more that’s needed? I mean, are we in a better place for 2020 than we were for 2016?
DiResta: I think we are. I think we are, because I think some of the calls for multistakeholderism from myself and others have actually been resonant in some ways. We had a great example: the investigation project that I did was outside experts working with the government to understand what happened. The third piece of that is really creating this multistakeholder system that incorporates the tech companies, and that, rather than being backward-looking—where we’re all still trying to suss out what happened in 2016—is informed by these findings. We say, this is how this investigation went, this is how it could have been improved, this is how we can structure it to be forward-looking and to find things that are going to be a problem in 2020.
I think that the 2018 midterms were almost like a pilot project for that. I know I was in touch with the tech platforms—I could back-channel things to them: Hey, look at this bot, hey, look at this thing. And they were very receptive to it, so we had moved past that period of OK, thanks, whatever, and we were solidly in the realm of This is great. Our team will get right on this kind of thing. That’s where I hope that we kind of formalize some structures to continue on the positive work of 2018.