S1: This ad-free podcast is part of your Slate Plus membership.

S2: Welcome to the show about how technology is changing our lives and our future.

S3: I’m April Glaser. And I’m Will Oremus.

S4: Hey, everyone, welcome to If Then, coming to you from Slate and Future Tense, a partnership between Slate, Arizona State University, and New America. We're recording this on the afternoon of Tuesday, October 30th.

S2: On today’s show, we’ll look further into the presidential election in Brazil and how tech has played a role. On Sunday, the far-right candidate Jair Bolsonaro was elected president.


S5: And many have attributed his victory to massive misinformation spread through WhatsApp, which is, of course, owned by Facebook.

S3: We’ll talk about the newest tech from Apple, which had an event in New York City this week. It launched two new Macs and a new iPad.

S6: Yes, Apple in the Big Apple. We'll also be joined by Joan Donovan, the lead researcher at Data and Society who focuses on online hate groups and all kinds of manipulation on social media. Donovan has done extensive research on how communities of hate use social media to recruit and organize. It’s a conversation that, sadly, has been reignited this week following the horrific terrorist attack on the Tree of Life synagogue in Pittsburgh over the weekend. The shooter, Robert Bowers, had been an active user of the free speech absolutist social media platform Gab, which has become kind of a digital playpen for neo-Nazis and white supremacists. Gab went offline Sunday night.


S7: And as always, we’ll end with don’t close my tabs. Some of the best things we saw on the Web this week.

S5: Okay, Will. So I’m talking to you from New York to Delaware. You know, you were just in New York, right?

S3: I had made a whirlwind trip up to New York for the Apple event. It was a rare Apple event in New York. Usually they have them in the Bay Area. So I made the trip. I’m back in Delaware now and I’m talking to you from there.

S6: Yeah. And so how was it? What was the vibe?

S3: It was interesting. You know, I’ve always covered these things from afar. This was my first Apple event. And from afar, it’s very easy to joke about the stuff they’re saying and snark and make funny comments on Twitter and criticize. It’s hard to do that when you’re sitting in a theater surrounded by really genuinely enthusiastic Apple employees who are just going nuts. Which raises the question: who goes to these things, actually, besides, you know, journalists and a few celebrities?


S8: And are they, like, paid to do this? No, I think... I mean, maybe they are, but I don't know.

S3: They seem jazzed to be there. I know that for this event in New York, Apple picked some of its retail employees from the various Apple stores in New York City. In fact, I was sitting next to a few of them. It was like the thrill of a lifetime for them. And they do put on quite a show, even the smaller shows. Their fall events are not the big iPhone event; this one was to announce a new MacBook Air and a new Mac Mini and a new iPad Pro, so kind of their second-tier products. But they put on a big show. They had Lana Del Rey at the end give a little concert. Naomi Campbell was hanging out there; I saw her. So they definitely bring the star power.


S8: So tell me about some of the new Apple gadgets. I don’t know. Improvements or not improvements. Things we need, things we don’t need. What’s new?

S3: The headliner was the iPad Pro, but I want to talk about the MacBook Air, because I’ve been waiting for this for so long. It’s just such a wonderful everyday computing device, and they haven’t updated it since 2015. They finally came out with an all-new one, and it has most of the stuff that you would hope for and expect. It has the Retina display that the last one did not have, to the disappointment of a lot of Apple fans. It has Touch ID. It has a Force Touch trackpad, USB-C. They actually managed to fit the same size screen in a smaller computer; it's not just lighter, it's actually less volume, and that makes it even easier to handle than the old MacBook Air. So it seems like a great device. That said, there’s nothing really exciting about it. I mean, they’re not pushing computing forward. It’s not going to change the way anybody interacts with their devices or with the world.


S8: Okay. But you know what, you told me before we started taping this that you sense they're moving away from laptops, which to me just sounds so obtuse, because you would have to, like, literally surgically remove my laptop from my lap, from my hands. I feel like I’m always scrunched down in a corner somewhere typing furiously as a journalist. So what were you getting at there?

S3: Yeah. I am glad they’re not giving up on the MacBook Air, to be clear. They also have the MacBook and the MacBook Pro, and both of those actually had all kinds of problems with the keyboard, among other issues. The MacBook was too expensive and too limited, really, to replace the MacBook Air. So I’m glad to see that, for the record. But it just seems like the iPad is what they’re really pushing as the future of personal computing. They went ahead and compared iPad sales figures to the sales figures of PC laptops. They seem like they’re positioning the new iPad Pro as the device that they think everybody will want to have to put in their purse or put in their bag to take to the coffee shop and set up shop for a while, or to work on their couch at home. And you know, it’s getting better at that. It’s more powerful now. You can very easily snap on a keyboard case. But it just is not the same with iOS as it is with macOS. Ask anybody who’s ever tried to do, like, a spreadsheet on iOS. They’ve been working on it, but it still feels like you’re trying to get work done on a device that’s made for pleasure. I still find it an awkward fit.


S6: All right. Well, thanks for the dispatch. It sounds like you’ve survived yet another Apple event, but this time in person. We’re gonna move on now to our interview.

S9: Today, we’re joined for a special segment on the presidential election in Brazil that finalized on Sunday evening with the extreme-right candidate Jair Bolsonaro winning. His rise has been marked by his homophobia and racism and troubling comments against women, as well as his praise of Brazil’s former violent dictatorship. Bolsonaro grew in popularity in large part thanks to the platform offered by Facebook and WhatsApp, where misinformation about Bolsonaro’s opponents spread like wildfire ahead of the election. With us today is Pablo Ortellado, who is a professor of public policy at the University of Sao Paulo in Brazil. He is the author of a report that found that over 50 percent of the most widely shared images on WhatsApp ahead of the election were misleading or flat-out false. He wrote about the study in a New York Times op-ed, co-authored with two of his colleagues, earlier this month, before the election closed. Professor Ortellado, thank you so much for joining us.


S5: My pleasure. So let’s start with kind of the basics of how Facebook and WhatsApp played a role here. Can you help me understand how the social media platforms work to inform voters or give a platform for such an unconventional candidate?

S10: The first thing is that Brazilian political campaigns are publicly funded. There is no private money, or very little private money, in the political campaign; there is only public money. And the public money is distributed according to the number of seats that you have in Congress. So since Bolsonaro was in a small party, he didn’t have any money to run his candidacy. And in Brazil, you also have free airtime on TV and radio to broadcast your political campaign, and this broadcast time is also in proportion to the number of seats that you have in Congress. So he didn’t have the two most important assets for running for president, which are money and airtime.


S11: So his strategy was entirely based on using social media, especially Facebook and WhatsApp. Facebook is known for being a platform that is suitable for political campaigning, as you have seen in America since 2016. But the novelty of the Brazilian election is that Bolsonaro managed to build a campaign around WhatsApp, which I don’t think is even a very popular messenger app in the United States, but it is very popular in Brazil. About two-thirds of Brazilians use WhatsApp regularly. And WhatsApp is a form of closed chat groups. They’re not public; only you and your friends are part of the chat groups. There are hundreds of thousands of chat groups on WhatsApp. He managed to create WhatsApp chat groups and to distribute political messages directly, through direct messaging, to people. And because WhatsApp is based on closed groups and its communication is encrypted, you cannot know what he’s broadcasting, what he’s saying. So he’s communicating with people without being noticed. So he used that, especially in the dirty campaigning, to distribute disinformation at a very large scale without being noticed. When people realized there was a large operation in course, it was basically too late. So our article in The New York Times was basically telling WhatsApp, the company, that we had noticed they were using this as a strategy, and that WhatsApp should take measures to prevent it, because they were basically using it to do political campaigning made of lies. We measured the activities of 247 groups that posted invitations online. So they are kind of public groups; even though they are closed, they published invitations online for people to get in. And we monitored those groups during the first round of the elections in Brazil. And we found out that 56 percent of the messages were basically lies, information taken out of context, or plain lies, and only 8 percent were truthful material. All the rest was basically propaganda.


S6: So what is it about WhatsApp that makes disinformation spread so rapidly? Why doesn’t information that’s correct spread as well?

S11: WhatsApp was originally a messaging app, but it also incorporated some broadcast features. You can send the same message to 256 people, and you can create groups without people authorizing it; you can just add people whose numbers you have and put them in a chat group. And you could originally forward messages to a large number of people; now this has been restricted to 20 people.

S10: So because of that, you can communicate with a large audience. If you broadcast a message to 256 activists, and those 256 activists are each in chat groups with another 256 activists, within only one or two minutes you can communicate with hundreds of thousands of people without being noticed. The thing is that you communicate without being noticed because you are not in the public sphere; you’re not in the public arena. So this allows you to do disinformation without being noticed. You don’t know who is behind it, who produced the lies, because it’s just a number. And apparently what the campaign was doing was buying numbers in the United States and throwing them away. WhatsApp released a report saying that they have cancelled and blocked hundreds of thousands of numbers from WhatsApp during this election process. So there was a really large-scale operation carrying out this communication.
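The fan-out Ortellado describes compounds fast. Here is a rough back-of-the-envelope sketch of it; the 256-recipient figure comes from the interview, while the no-overlap assumption (every group member is a new person) and the hop counts are illustrative simplifications, so the numbers are an upper bound, not a measurement:

```python
# Back-of-the-envelope reach of repeated WhatsApp-style broadcasting.
# Assumes each broadcast/group hits 256 people (figure from the interview)
# and that groups never overlap (illustrative upper bound).

BROADCAST_LIMIT = 256

def reach(hops: int, fanout: int = BROADCAST_LIMIT) -> int:
    """People reached after `hops` rounds of forwarding from one sender."""
    total = 0
    wave = 1  # the original sender
    for _ in range(hops):
        wave *= fanout   # each person in the last wave hits `fanout` more
        total += wave
    return total

print(reach(1))  # 256: one direct broadcast
print(reach(2))  # 65,792: one round of forwarding into full groups
print(reach(3))  # ~16.8 million after a third hop
```

Even with heavy group overlap in practice, two or three forwarding hops are enough to get from one sender to the "hundreds of thousands" Ortellado mentions, which is why the proposed limits targeted broadcast size and forward counts.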

S3: Right. And you mentioned, as one of the examples of the misinformation, that voters in Brazil vote with a number that corresponds to each candidate. One of the viral false stories that went around had a picture of a different candidate, Luiz Inacio Lula da Silva, next to the number 17, which was actually the number for Bolsonaro. I was curious: Facebook made some efforts to try to control the flow of misinformation. Did they do enough? Did they do anything that you saw that seemed to really matter? I know that some people were calling for them to limit the number of people that you could forward a message to at the same time. What did Facebook do, and did it have an impact?

S10: Facebook owns WhatsApp, but I understand WhatsApp has a different manager and an independent management team. And I don’t think WhatsApp was prepared to do anything, because it was the first time this happened. I mean, WhatsApp has been used for political campaigning before: in the peace process referendum in Colombia, and in the Mexican elections in July, but not as the main tool for political campaigning. I think the scale and the centrality of the use of WhatsApp in the Brazilian elections was completely new, and WhatsApp wasn’t prepared to take measures. So basically what we told WhatsApp is that they should limit the number of people you can send the same message to, which is called broadcast; they should limit the number of forwards, the number of times you can forward a message; and they should limit the number of groups you can create and the size of the groups. That could have limited that strategy. Because the problem with WhatsApp is that you have mass communication with secrecy. Secrecy is very good for interpersonal communication, because it protects privacy and freedom of expression; those are very solid features. But when it’s combined with mass communication, the combination is not good, because you can talk to a mass audience without being noticed, and you can use that for a large disinformation campaign.

S11: So WhatsApp wasn’t prepared to do that. When we wrote to the company and said, we have noticed this, and we think you should take those steps, they basically said it’s too late: it would take us a couple of weeks, maybe a month, to do it, and by the time we implemented those changes, the campaign would be over.


S5: Professor Ortellado, this has been an incredibly insightful conversation. Thank you so much for joining us all the way from Brazil.

S12: Thank you. It was a pleasure to talk to you. All right. We’re going to take a quick break. When we come back, we’ll have our interview with Joan Donovan, an expert on hate groups and social media.

S5: Our guest today is Joan Donovan. She’s a lead researcher at Data and Society, a nonpartisan research center dedicated to teasing out some of the most complex cultural and political questions in our increasingly networked world. There she serves as the project lead on media manipulation, which is basically the study of how online media can be instrumentalized to manipulate us. Donovan is also an affiliate at the Berkman Klein Center for Internet and Society at Harvard. I wanted to talk to Joan today because for many years her research has focused on how hate groups use social media to forward their agendas. She’s one of the sharpest minds on this topic. Joan Donovan, thanks so much for joining us.

S13: Thank you both. Great to be here.

S5: So hate groups exist online. And this is something that we’ve seen in the news recently, following the attack on Saturday morning at a synagogue in Pittsburgh. The shooter was a member of Gab, an online social network that allows for free speech, and because of that, hate speech, basically anything, goes.

S9: But hate groups exist offline, too. I’m curious how important social networks are to organized hate in the United States.

S14: Yeah. So it’s an important question, but it’s a sort of difficult one to answer, in the sense that I have to go back about 25 years, to the moment when the former KKK member, his name is Don Black, started Stormfront. He had just recently left prison, where he learned to program, and he realized the opportunity for building a coalition of like-minded whites was more likely to happen online than offline. And he saw the power of the Internet very clearly. And he started the Stormfront message board. And for the most part, the people who were posting on that message board were finding one another online by using search engines, along with other forms of BBS boards and all of the ways in which, when you finally go down the rabbit hole, you can end up finding some other, you know, fellow travelers. And in growing the population, or the user base, of Stormfront, though, people were really leery about going out in the public space and talking about these issues. They would talk a lot on Stormfront about how they were thankful for the community, but they understood why they needed to remain anonymous, because they would be shamed if they were to voice these ideas in public. And you can look at research by the SPLC: Stormfront has led to some of the most egregious, violent mass murders in the States, and it is primarily a place where we can track quite a bit of domestic terrorism plots, where people talk about the desire to eradicate certain public figures, certain groups of people. And so as we watch the Internet develop, we also watch the forums and the platforms for white supremacy change. And if it weren’t for the political conditions of early 2016, we probably wouldn’t have seen street movements develop around white nationalism.
But what was going on, as the MAGA coalition built on itself, was that people were using hate speech as a way to demonstrate their adherence to sort of a fundamentalist free speech principle online, but they were also using Nazism in a very ironic way. That is, they were trying to get people to spread these messages out of, you know, essentially shitposting and banter. And then banter eventually turns into belief. And we watched this unfold as researchers. When I was at UCLA, there was a clear change in the way in which white supremacists were using platforms to recruit. They were showing up on social media; of course, Richard Spencer and the alt-right were fundamental in recruiting a bunch of people to these then-political platforms. And so where Gab shows up is actually pretty interesting, in the sense that it shows up in early 2016 as a platform that will not moderate. And Andrew Torba, who is the CEO of Gab, starts talking about how there’s anti-white racism on all of these other platforms, like Twitter, YouTube, Facebook, that are going to take down your posts if you post anything related to being pro-white. And in that recruitment, Torba was providing both a safe haven for white supremacists on his board and also the essential movement infrastructure that a place like Stormfront envisioned in the late 1990s but wasn’t able to bring to fruition. And so I’m not surprised to see this kind of violence come out of Gab. It looks a lot like the kind of violence we’re used to from Stormfront.


S7: And I appreciate something that you said early on, which is that it’s hard to disentangle the online speech from the real-world violence. I mean, this is something April has written about some, too. When you have this debate, people who support platforms like Gab say, you know, free speech, free speech. But then, as you pointed out, there’s a long history of the folks who used these sites, you know, Stormfront, going back to Anders Breivik. I believe Dylann Roof was linked to Stormfront. You can correct me if I’m wrong.

S15: And now we’re seeing Bowers. He was also on The Daily Stormer. Yeah. And that’s another space.

S16: The Daily Stormer is interesting because it’s a sort of millennial take on Stormfront, where Stormfront users are now sort of the boomer white nationalists. And Andrew Anglin, who built The Daily Stormer and then incorporated weev as the sysadmin, he understands this. You know, a tactic is to use irony, but also, people need a place to organize. And so The Daily Stormer was a place, again, where lots of unmoderated speech happened, and we saw the results of that. The Daily Stormer was one of the main spaces for organizing the Unite the Right rally.

S17: And so we saw, after the Unite the Right rally, The Daily Stormer got deplatformed, somewhat similar to what happened to Gab after the shooting this past weekend. And I’m curious. We see these companies that interoperate with Gab and The Daily Stormer take sites offline once something awful happens, right?

S5: And when I say that, I mean, the Internet is a network of networks, and in order for a website to run, you have to work with other companies, like website hosting and security companies, and the companies that were working with them decided to pull their services. But these companies had these anti-hate-speech policies and anti-harassment policies for years, right? Like DreamHost and GoDaddy. They prohibited this, but then they only decided to enforce it once something violent happened. And I’m curious if you could talk about the timing of this type of deplatforming. Like, why are they waiting to enforce it until something awful happens? Why didn’t they enforce it years before?


S18: Yeah. And this, I think, is just part of the way in which all laws tend to work, which is that laws don’t work without public pressure. We have a lot of laws that don’t ever get enforced, because mainly, if there isn’t public pressure to do the work, then, you know, a lot of things just go unremarked upon.

S13: And so even the content moderation industry, as it’s developed, has been plagued with inconsistencies and, you know, claims that all content moderation is censorship, and that this space we call the online should be completely unregulated, should be jurisdictionless, and should be a completely unmitigated, you know, free flow of thought. But what we know from relatively better-moderated spaces online is that the conversation tends to stay with the themes that the people on the platform are interested in talking about. But when you don’t moderate anything, this is sort of, you know, maybe like the third law of the Internet: everything turns to Nazism.

S15: Right. Right.

S13: No, it’s just this weird thing where it’s like, yeah, I thought this site was about posting pictures of cats and kids, and eventually it’s just cats, kids, and swastikas, and nobody understands why. And that’s because there’s nobody doing any enforcement. And so the issue with the content moderation industry is that at the same time a place like Twitter wants to hold out that there is a free speech wing of the Internet, they also realize that if the users they want to keep believe the company supports trolling, harassment, white supremacy, foreign espionage, then people are generally going to think it’s a bad product, and they’re going to opt out eventually. And then what you’re left with is sort of what Gab has decided to double down on, which is that eventually, spaces online that are unmoderated do tend toward social norms that reward violence, that reward misogyny, that reward, you know, extremist speech and behavior. And so it’s important that other platform companies hedge against this and learn this lesson: Gab is not censorship-free. It becomes a cesspool by design.


S5: Can you tell us about the role of memes and humor in these online hate communities? I’ve done quite a bit of research in here, too, and I see this gray area where it’s hard to tell if they’re joking or not. And it almost seems strategic.

S14: Yes. So one of the researchers here, Matt Goerzen, has written about strategic trolling and about ironic trolling online.

S18: And one of the things that we see time and time again is that we want to know what the intent of the post is. And I suggest, if you want to know what the intent of someone is, look around for other statements. And so, for instance, online they call him the MAGA bomber, the person who sent out the 15 bombs last week. I'm not going to say his name, but he drove a van that was covered in memes. In some of them, he placed his own crosshairs on politicians. He was making his own Internet media. And I think that when you look around at the other kinds of posts he was making online, he was both sharing memes, sharing in this world where we can’t know what intent is, and then he was also making death threats. And you can put some of this stuff together and think, well, you know what, he’s really invested in anti-Semitic, anti-Muslim, anti-media, anti-Democratic Party messaging. Plus, he’s making death threats. Right. And so when I looked at, you know, all the ways in which people were trying to use irony to get out of being associated with Unite the Right, again, we were seeing that this claim that nobody knew what their intent was was really hard to follow if you look around at the other things they were posting, like pictures of themselves with guns and shields and homemade armor, or pictures or statements of them saying, you know, can I get away with bringing pepper spray? So you can look around, and you can see that when people are organizing for mass violence, they betray themselves, in the sense that the meme isn’t necessarily the only form of communication.
And the other thing is that people who are recruiting for movements, especially white supremacist movements, know that one of the key points in radicalizing someone is to get them to laugh about what it is that they think is a problem, and then to experiment with using dangerous speech, to become familiar with those concepts, to become comfortable uttering those words, and to become comfortable with the reaction one might get for using those statements in public.


S5: So these spaces, these kind of online spaces where people can go and express hate speech freely or ask questions about hate speech, they’re not just places for people who already adhere to these beliefs. They’re also spaces where people who are curious can show up. And if they’re easy to find, then they can possibly be recruited and learn something, right?

S16: Yes. When I was doing my research at UCLA on white supremacist use of DNA ancestry tests, we were looking at Stormfront, and we were looking at the styles of posts online and how people show up with questions. One of the things we discovered is that Stormfront does pay moderators to receive questions, and they will promote dialogue on the platform to try to get people to engage. And I’ll tell you that in 2015, when you Google-searched for white nationalists, the first response was a direct link to Stormfront. You did find information about white nationalist families, white nationalists. Now, that’s different. And so, you know, the spaces that people go to to ask these kinds of questions are positioned to do engagement and are positioned to get people to look for these specific keywords, so that they will be the ones that get found. Dylann Roof, for example, went to Wikipedia first to learn about Trayvon Martin. And in that, he found this one set of keywords very curious, which was black-on-white crime. And that’s what he searched for in Google. And he found the Council of Conservative Citizens, which has infographics and info pages about black-on-white crime and statistics. And that’s where he became interested in these concepts, and he became interested in why people considered George Zimmerman white. And so when we’re looking at this, what we need to understand is that this is tied into an entire sociotechnical system that depends on people not knowing what to ask and not knowing where to look for help.


S7: All right. So, to zoom out for just a second. It seems like there’s a consensus now that people should not be able to spew hate speech and threats of violence on big platforms like Facebook and Twitter. Those companies may not be very good at moderation yet, but they do seem to be taking it seriously. So then people go to a site like Gab, which is pretty much explicitly about being able to say the things that you can’t say on Facebook and Twitter anymore. Now Gab is down, and perhaps rightfully so. And where do people go next? They now go to private Discord servers where nobody can find them. And what’s the endgame here? What’s the path that we’re on? We’re just chasing these people into darker and darker holes where no light can get out. And, you know, is this progress? Is it just sort of this endless push to get them out of the public eye?

S18: Here’s how I think about the Internet. The Internet is a process, not a product. It’s just like how we wake up every day and decide that the public square isn’t going to be taken over by neo-Nazis that day. You have to remain vigilant. You have to remain on target. And when companies do provide that support and that consistent and stable infrastructure, you have to call them to task, because, remember, Torba is making money off of Gab, right? It’s not just that Torba is providing a stable infrastructure for white supremacists. He’s also soliciting donations. He’s going to get a big check from, you know, people who are interested in, like, fundamentalist free speech, because they believe in what he’s doing. And so part of the way in which some of these platforms operate is on this, you know, techno-capitalist model, where every crisis is an opportunity to raise funds. And so we’ve got to call that out. We’ve got to name it for what it is. But at the same time, yes, of course, they’re going to move to other places. And hopefully, when they get to those other places, there are other people waiting to say, not here either. You know, the reason we ended up with 4chan and 8chan isn’t because 4chan was so disgusting they let people stay. It’s because 4chan said no, and people had to go to 8chan. And at some point, people on 8chan are going to say no, and they’re going to have to go to 16chan. But, you know, ultimately the process model is what you have to work with in your mind: saying we have to remake society every day, and we have to make it in the image of what we want. And if we get the Internet that we deserve, it’s because we’re not actively participating in it.


S17: Joan Donovan, thank you so much for joining us.

S19: Anytime.

S17: One final quick break, and then don’t close my tabs, some of the best things we’ve seen on the Web this week.

S7: Hey, listen, just before we get to tabs, a quick post-production note. We had a technical glitch with the recording of this segment. So if we sound like we’re talking to you through a tunnel, underwater, from far away, it’s because you’re listening to a backup tape that we fortunately kept of our Skype conversation. Sorry for this glitch, and hopefully it won’t happen again.

S20: It’s time again for don’t close my tabs. Will, let’s start with you.

S21: What do you have left open this week? The story from The New York Times. It was a fairly big investigation; the headline was How Google Protected Andy Rubin, the Father of Android. It was actually not just about Andy Rubin, but about three Google executives who had been credibly accused of sexual harassment or misconduct. And in all three cases, Google protected them in various ways. They kept the claims quiet. And in the case of Mr. Rubin, who was a very highly placed executive at the company, not only did they keep it quiet, but when they found that an employee had made a credible accusation that he had coerced her into performing oral sex in a hotel room, they gave him a hero’s farewell. According to The Times, they paid him a $90 million exit package and said nothing about the real reason he was leaving the company. This was a big and disturbing exposé about the company whose motto was once Don’t Be Evil. You reported, I believe exclusively, in Slate today that one of these people still works for Google.

Advertisement
Advertisement

S22: Yeah. I learned from sources, and we published this last night, that Richard DeVaul, who is a director at X, which is a sister company to Google under Alphabet, the company’s kind of experimental moonshot lab, had told a candidate during a job interview that he was polyamorous. And then later, before she had found out whether she’d gotten the job, he invited her to say hi at Burning Man. She went to Burning Man, as she was planning on going anyway, and said hi to him, still waiting to hear back about the job. He then asked her to take off her shirt so he could give her a massage. She was a young engineer. She ended up not getting the job. When she reported this to Google a couple of years later, they told her that what she said sounded credible, but they asked her not to say anything.

S23: And I found out that, as of Friday at least, he was still working at the company.

S22: And, you know, he had continued to work at the company, we know, even after this was reported. I just can’t even imagine if I were applying for a job at a newspaper or magazine that I really wanted, and the editor I saw later at a party asked me to take my shirt off while I was still waiting to hear back about the job. What I would do? It’s just terrifying. So what I heard from sources there is that there is a sense that the company kind of shelters people who act in this way, particularly men who act in a harassing way toward women. One source told me there’s an increasing sense that Larry and Sergey, who are the co-founders of Google, may be the problem. The source continued: “I don’t think they’re abusers, but they sheltered them. They clearly think there’s some amount of value they’re getting out of these men that outweighs the women they’re preying on.” This is also in response to the fact that Sergey Brin had a very public affair a few years ago with a Google employee who was working on Google Glass. And Eric Schmidt, it was reported in the Times, once retained a mistress, this is a quote, “to work as a company consultant.” So there’s a sense that this is just a deep problem that runs through the company all the way to the very top. Google’s own CEO, Sundar Pichai, shared on Thursday, along with the company’s vice president of people operations, Eileen Naughton, that Google had fired 48 people for sexual harassment in the past two years, 13 of whom were senior managers. So that’s like 24 people a year over the past two years. I mean, Google’s a big company, but that seems like a lot.

Advertisement
Advertisement

S21: Yeah. And there was a planned walkout by some Google employees this week, I think.

S22: Yeah. So, you know, people at the company, which, let’s be clear, Alphabet is the second most valuable company in the world, are definitely reeling over The New York Times report and over how Google handles credible claims that come in from women who work there.

S24: Yeah, I’ve definitely heard rumblings that this was in the culture of Google from the very beginning, sort of an original sin of the company. It’s disturbing to read about now, and maybe it’s good that they’re finally being forced to confront it, albeit extremely belatedly.

S22: Yeah. So I don’t expect this to be the end of the story there. Unfortunately, my tab is not any brighter, but it is very interesting.

S20: All right, my tab this week. I know we like to end on a lighter note in the tabs, but this has just really been such a heavy week, with all the difficult news, and I don’t have an up note to end on here. But I watched this fantastic documentary on Frontline last night about Facebook. And I wasn’t expecting it to be fantastic, because who wants to sit around watching a documentary about Facebook? But it actually really, really was so interesting, and I learned a ton. It’s called The Facebook Dilemma. It’s a two-part series; there’s another episode tonight (we’re recording this on Tuesday), which I plan to watch. One of the things that I learned last night was that there was misinformation circulating during the Egyptian revolution back in 2012, that Facebook was well aware of it, that it made things difficult for activists then, and that DARPA had published something like 200 reports on misinformation and bots and things like that that were a problem on Facebook. So Facebook had been aware of this problem that we’re now having a public reckoning with, this disinformation, for so long. Right? And so it really brings into question the culture of the company and the level of responsibility they chose not to take. A really well-done documentary. You know, I just said the Egyptian revolution was in 2012; it was 2011. Sorry, long day; hopefully that’ll be cut out. Will?

Advertisement
Advertisement

S24: Yeah, I’m looking forward to it. I’ve been meaning to watch it, actually. And there’s definitely been misinformation on Facebook for a long time. I remember, this wasn’t as far back as 2011, but in 2014 I did a story about hoaxes: hoaxes about copyright notices that you could supposedly paste on Facebook to protect your information, hoaxes about bananas curing cancer, hoaxes about vaccines. And I talked to Facebook back then about it, and they said, you know, we’ve never really tried to do anything around worrying about whether what’s said on our platform is true or not. And they kind of mused, “That’s interesting, you know, maybe that’s something we’ll think about in the future.” But it was just clearly not on their priority list, even in 2014.

S20: It’s really interesting because, you know, in 2011, the big news story was that Facebook could be a major tool in toppling an entire government. Right? “The social media revolution” was the kind of headline people liked to use.

S22: And, you know, they didn’t seem to take that seriously, or if they did take it seriously, they didn’t seem to think through the problems, or think through how they should protect the platform if it was going to be used in such powerful ways.

S24: Yeah, it was just such a time of optimism that people, the people running these platforms, thought they would just default to being forces for good in the world. Obviously, now we know that’s very much not necessarily true, and now we’re trying to go back and figure out what went wrong. Anyway, I look forward to watching the documentary. Thanks for that.

Advertisement
Advertisement

S20: Yeah, I guess optimism and billions of dollars are a powerful drug. But that does it for our show today. You can get updates about what’s coming up next by following us on Twitter at @ifthenpod.

S25: You can email us at ifthen@slate.com. Send us your tech questions, show and guest suggestions, or just say hi. You can follow me and Will on Twitter as well.

S26: I’m @Aprilaser, and Will is @WillOremus. Thanks again to our guests: Pablo Ortellado, you can follow him on Twitter at Pablo Ortellado, with an underscore between his first and last name, and Joan Donovan, you can follow her on Twitter as well, at @BostonJoan.

S25: Thanks to everybody who’s left us a comment or a review on Apple Podcasts or whatever platform you use to listen to us. We sincerely appreciate that. And if you haven’t had a chance to do that and you do listen to our show, I’m sounding like an NPR pledge drive here, but please pay us back with a nice review. It helps us a lot.

S26: If Then is a production of Slate and Future Tense, a partnership between Slate, Arizona State University, and New America. Our producer is Max Jacobs. Thanks to Baker Sound Studios here in Philadelphia, where I’m recording from today.

S25: And thanks to Nick Holmes and the studio in Newark, Delaware. We’ll see you next week.