In recent months Twitter has been accelerating its long-running battle with fake and suspicious accounts in order to clean up the platform—a place that has been criticized as a hotbed of lies, hoaxes, and abuse for nearly as long as it’s been praised as a place where almost anything goes. The Twitter executive leading these moderation efforts is Vijaya Gadde, the company’s legal, public policy, and trust and safety lead. I recently interviewed Gadde on Slate’s technology podcast, If Then, where we discussed Twitter’s current approach to harassment, hate speech, and misinformation; whether the social network’s very structure encourages these problems; and why conspiracy theorists like Alex Jones are still allowed to have Twitter accounts. Our interview has been edited for concision and clarity.
Will Oremus: Is it fair to say that a big part of your job is to keep Twitter clean and safe, and free of trolls and bots and harassment and abuse?
Vijaya Gadde: I think that’s certainly a big part of my job, but I would be remiss if I didn’t tell you how many people at Twitter’s job that is. There is an entire network of people cross-functionally across the entire organization who work on this and who focus on this day and night, to make sure that Twitter is a platform that promotes healthy conversation.
So you’re leading a big team of people whose job is to keep the bots and the spammers away, to protect elections from foreign interference and meddling, to police hate speech and abuse and harassment, and promote healthy dialogue, even among people who disagree with each other. And so I guess my first question for you is: How’s that going for you?
Well, I think … just to clarify, Will, my team is responsible for setting the policies that govern behavior on the platform, so we obviously work really, really closely with the operations teams and the product teams to enforce those policies and make sure that we're living up to them. But taking a step back, what I'll say is this is an ongoing journey that we as a company have been on, probably one that we were late to focus on, to be fair. And one that is going to remain a priority for us because I don't think that this is a static landscape in any sense of the word.
As we get better at certain things, new threats tend to emerge. So this is an ongoing battle that we’re engaged in, and one that we take very seriously. And I think one of the things that I’m most proud of is our commitment to continuing to get better and better, even in the face of what is a lot of fair criticism, and the face of really, really daunting challenges in front of us.
Maybe there’s a way I can reframe that initial question that’s a little more fair, which is: How well should we expect this battle to be going, and what standard do you think Twitter should be held to? Is the goal to have no hate speech, no abuse, no bots, no trolls? Is that realistic? How do we know if Twitter is doing a good job or not? On the one hand, anybody who uses Twitter could tell you that they still encounter a lot of ugly stuff on there. On the other hand, there was a report recently that you guys are suspending a million fake accounts a day? Can that possibly be right? I mean, obviously you’re working extremely hard on this, and it’s still not a solved problem. So what would success look like for your team?
I think that’s a great way of framing it. I think what we hold ourselves accountable to is a couple of different things: providing a lot of transparency into what we’re doing, providing clarity for the people using our service, and then being consistent. Because I think that that is really, really important. I think that we have a lot of room to improve in all of those areas. And I think it’s absolutely fair for people to hold us to a higher standard because I think one of the things that our CEO Jack [Dorsey] has talked about is that we were slow to really acknowledge and understand all of the real-world ramifications of behavior on the platform.
All of that being said, I do think that we are constantly showing improvement. We have shipped so many changes to our product, to our policies, to our operational approach. And this is not stopping. You’re going to continue to see this from us. If you look back to what Jack committed to in March, which was our renewed public commitment to the health of the public conversation, making sure that we’re not just focused on removing bad actors and bad behavior from the platform, but that we’re actually also starting to figure out ways that we can encourage healthy dialogue and healthy conversation, even if the conversation is about a topic that tends to be very, very controversial.
Right, that makes sense. And I actually really appreciated how Jack Dorsey, your CEO, came out and said: We’re rethinking everything. We want to rethink from the ground up. What would Twitter look like if Twitter were the platform that we wanted it to be? And you guys are focusing, as you said, on this idea of conversational health, healthy conversations, and that seems like a good guidepost. But you know, Twitter has a couple of things in its fundamental structure that make that hard, I think.
It’s both public by default—and you can go private, but by default it’s public—and it allows for anonymity. And that’s just an explosive combination. You see it on Reddit, you see it in the comment board of any news site, and you see it on Twitter. Has Twitter ever thought about rethinking either of those things? I mean, rethinking the publicness or the ability to be anonymous—what if Twitter had a real-name policy like Facebook does? Wouldn’t that cut down more drastically on the type of stuff that you’re trying to cut down on, than whacking every mole as it comes up, even if that’s a million a day?
I think that’s a fair question. I think being public is part of the nature of our platform. I think it would be very difficult to see how we would distinguish ourselves as a service in the world without that public nature. That is so core to who we are, and the service that we provide in the world, which is really allowing these public conversations to happen. With respect to pseudonymity, that is something that we think about a lot because, you’re right, the veil of pseudonymity does allow people to troll, spam, run bots, and do a bunch of other things.
On the other side of the balance, the thing I think about a lot is that pseudonymity also allows dissidents and activists and whistleblowers and journalists in many parts of the world to speak out in places where they otherwise would not be able to use their real names. It’s all about finding that right balance, and we’ve had a lot of discussions about what that might be.
One of the things, Will, that I’m really careful about is not to judge the Twitter service and the Twitter product through the lens of just one particular country. We are a global service. Over 75 percent of our users are outside of the United States, and so when I’m thinking about policy changes, and certainly something that dramatic, I really would want to understand the impact not just in our society here in the United States, but how that would impact societies in Turkey, in Russia, in a bunch of other places around the world.
And so I think that there is some work that we can do. I will say that other platforms that might have a real-names policy still suffer and are plagued by some of the very real problems that we suffer from. And so I don’t think just changing our policy to say real names only would solve these problems overnight. I think some of the problems and challenges would get a little bit easier but certainly other things in terms of enforcement at scale would still be a problem.
So I think about this a lot, but we’ve also done a lot of work in the background, working with our product and engineering teams, to really be able to go after some of the repeat offenders that use anonymity or pseudonymity to come back to the platform, and use a bunch of technology signals and other information that we have to try to block those sockpuppet accounts even before they come back. And we’re having more and more success there, I would say.
So philosophically, I still think it’s a fundamental aspect of Twitter that makes it so great in so many parts of the world, but I understand the challenges of it, and we’re working to see what we can do to make sure that we’re still finding the right balance between those things.
You talked about finding the right balance, and I know that that has been the objective for quite a while now at Twitter, when you’re talking about balancing the desire to allow people to speak their minds, even with controversial ideas, against the desire for people to feel like they’re safe and not being attacked for their identity, or elections being manipulated, that sort of thing. I wanted to go back to a quote that always comes up when people talk about the history of Twitter. They like to talk about this narrative arc where early on the company was radically committed to free speech, and just had this laissez-faire, anything-goes attitude. And there was this famous quote that Twitter is “the free speech wing of the free speech party.” I know the first time I met you, I think that came up, and you had an interesting anecdote about that quote. Do you recall where that quote came from?
I wasn’t at the company when that was said, and I’ve been here for seven years, so it’s definitely back in the earlier days of the company, which is about 12 years old now. I kind of cringe when it’s said, not because I am backing away from a fundamental principle that we believe free expression is a good thing, and is a fundamental human right, but because I think it was said kind of off the cuff by an employee in one of our offices, and it’s not something that we as a company decided was our own slogan. So it always kind of makes me laugh because it’s not something that we picked for ourselves, but it’s certainly been attributed to us a lot over the years.
I think what that quote does is lead a lot of people to think that we are absolutists about this, and that it means free speech at all costs. There may have been a time in the company’s past where that was the case, and I’m not going to speak about when I wasn’t responsible for these areas or when I wasn’t at the company. But what I can tell you right now is that we do believe that freedom of expression is an important right for people, but we also believe that that is very much balanced by making sure people feel safe in order to speak up. Abuse and harassment that is against our rules, that intimidates people, that inspires fear in people, that silences people, is not something that we want to tolerate.
I wanted to ask you about a specific personality that’s been in the news a lot this week, and mostly with respect to Facebook, actually. And that’s Alex Jones of Infowars. This is a site that traffics pretty routinely in conspiracy theories. Some of those conspiracy theories have been pretty out there and probably harmful to people. He questioned whether the Sandy Hook school shooting was real. In the wake of the Parkland school shooting, when these brave young children who had just seen their friends get slaughtered before their eyes were going on TV and talking about their experiences, he pushed the idea that they were actually paid crisis actors, paid by the left to do a sort of false-flag conspiracy to get anti-gun legislation passed. I mean, that seems like pretty hurtful stuff. Does a person like Alex Jones—why does he have a place on Twitter? Why does he belong on Twitter at a time when you’re pushing for healthier conversations?
Without talking about specifics, although I’m obviously aware of Alex Jones and Infowars, what I would say is that philosophically, we have thought very hard about how to approach misinformation. As much as we and many individuals here might have deeply held beliefs about what is true, what is factual, and what’s appropriate, we felt that we should not as a company be in the position of verifying truth. Because that is not where we want to be, nor do we think it’s our role or responsibility in society. Now, that does leave a gap for what we call behavior that starts online and might go into the real world, and we definitely have policy-based decisions that we can make about when we see that type of behavior happening, like inciting people to violence as an example—
Like with the Charlottesville violence, where the protests and counterprotests were both organized partly on Twitter.
That’s right. And if there was a specific call to violence, a violent protest, that would be something that we would obviously take action on as incitement to violence. So we think about that a lot, but the fundamental aspect of whether X is true or Y is true and should we ban them because it is not? I’m going to leave that to a lot of people out in the world who spend a lot of time thinking about this and have a lot more context to be able to dispel rumors or myths or falsehoods that happen all the time.
I think that that is one of the things I actually love about the platform, is that you can see a tweet and it can contain a lie or a misstatement or conspiracy theory or whatever it is, and oftentimes the first tweet underneath it, the first reply is, “This is not true and this is why.” And it does put a lot on the people using the service to judge the credibility of those two things against each other, and I think that is an area actually where Twitter can help. And that is something that as part of our overall health initiatives we’re really focused on, which is providing more context about the people who are in these conversations, to be able to give information to the people using the service about the relative credibility of those two accounts.
OK, so just to be clear, I mean, the reason Facebook has gotten criticized so much over its tolerance of Alex Jones is that it’s in this big campaign to fight misinformation, but you’re saying that Twitter is actually not in a campaign to fight misinformation, and fighting misinformation isn’t one of the core jobs that you think that Twitter should take on?
Well, I think that it’s a different characterization. Our job is to improve the health of the public conversation, and part of that is increasing the quality of information on the service. But I am not claiming that we are going to be battling or able to battle every falsehood or lie or aspect of misinformation that’s on the platform. At the scale that we’re operating, I think that’s unrealistic.
Certainly I understand people are like, “But you know this particular thing is false. Why can’t you just take action on this?” That’s just not how our policies work. Our policies are meant to operate at scale, globally around the world. And so, while I do think information quality and battling the spread of misinformation is something that we as a company are focused on, that is not to say that we are going to take action on individual accounts because there are allegations that they are false or misleading.
I know you’re limited in what you can say about individual accounts, but why did it take so long to disable Guccifer 2.0, which was using Twitter to share stolen information, according to the Mueller indictment that just came out? The Guccifer 2.0 account, which apparently we now know was run by Russian agents, or at least that’s what the indictment said, was only shut down this past weekend. But people knew that it had been sharing stolen documents for a long time before that. Do you know why Twitter couldn’t take action earlier on Guccifer 2.0?
I guess what I would say to that is, without speaking as to that specific account, we’re definitely taking a look at our policies to understand any gaps that we might have. We currently prohibit the posting of tweets that contain private information, but we don’t clearly and explicitly prohibit the spreading of hacked materials. That’s not something that’s a clear violation of the rules as they’ve been previously enforced. So that’s something that we’re taking a close look at to make sure that the platform is not being used in ways that undermine healthy discourse.
As to other types of situations, sometimes we are just looking for more facts before we can act, more context, and we try to take action as quickly as we can. In some cases, we just need more information before we can do that.
I went on Twitter before I talked to you and put out a tweet that just said, “Hey, I’m going to be interviewing Vijaya Gadde today on If Then. Send me one good, tough question that I should ask her.” And I thought that what happened afterward sort of illustrated for me both the wonderful things about Twitter as a platform and the challenges. So you replied in a friendly and professional way, and then Jack Dorsey, your CEO, retweeted it and said, “Yes, ask us some questions.”
And then my mentions were just absolutely flooded with questions because Jack has such an immense following. And then Gab.ai—a rival social platform favored by members of the alt-right—also retweeted it with their own spin, so then I had my mentions flooded with people making racist and sexist comments and that sort of thing. But beneath all that noise, there were some really good questions that a couple of people asked.
And so with that long preamble, I wanted to ask you one from Casey Newton, who’s a social media reporter at the Verge. Casey said, “What’s your latest thinking on what verification should mean on Twitter?”
Oh, I’m laughing because I thought Casey’s first question was about the edit feature, so I thought you were going to ask me that question.
I ignored that one because he’s asked you guys that five times.
OK, great. Verification I can handle, no problem. It’s not controversial at all compared to the edit feature. In all seriousness, we obviously paused the official public channel for verification last fall, and we said we needed to holistically rethink the program because it was broken. A couple of reasons, but the main one is it really conflated identity with endorsement by Twitter, and that was not a place that we wanted to be. So we’re in the process of rethinking that.
I appreciate that it’s not moving quite as fast as we would all like it to, and I think you’re going to hear a lot more about that actually from Kayvon [Beykpour], who is going to be tweeting. Kayvon leads our consumer product team and he’s going to be tweeting about that, so stay tuned on that front. There’s more coming. And I think that this is an area where we have some work to do, for sure, but it’s probably not as high of a priority for us right now as making sure that we’re really focused on information quality, particularly leading up to the midterm elections here in the United States.
I know I said I would ask just one, but there was actually another one that I thought was so good I wanted to ask it. It’s from Renee DiResta, an activist and analyst who works on issues of algorithmic accountability and fairness and has testified to Congress on those topics. Her question was, “Have your views on harassment changed since becoming a parent?” I know that you are a new mom, and I just wondered if that has affected how you think about these issues at all.
That’s a great question. Thank you, Renee, for asking it. I don’t think so. I try very hard as part of my job to reach out to people using the service and understand their experiences on the platform, regardless of my own personal situation, and I try to hear from people with diverse viewpoints. I think being a mom has changed me fundamentally in so many ways that I’m not sure I could necessarily isolate that. But I don’t think specifically there’s something that I’m now rethinking because I’m a mom. I am a very different person, though, so it’s hard to know what about that comes through day to day. But I can’t point to anything specific.
All right. Vijaya Gadde, thank you so much for joining us on If Then.
Thank you, Will. It was a lot of fun. Talk to you soon.