Hate, Lies, and AI

S1: I want you to visualize something from before the pandemic. Imagine Mark Zuckerberg sitting at his desk at Facebook. He sort of sits in the center of his office in this giant building, and the teams are always just around him. That’s Karen Hao. She covers artificial intelligence for the MIT Tech Review. Karen is one of the best A.I. reporters in the country, and she wrote a story recently about A.I. at Facebook. Some of the people she talked to for that story told her about this moment when the people working on A.I. got to sit near Mark Zuckerberg. When the A.I. team sat there, he was so close to its early hires that their desks were practically touching. Karen’s sources told her that this kind of physical proximity was a window into what Zuckerberg valued at any given moment.

S2: Zuckerberg really likes organizing the teams in such a way where he’s always surrounded closest by the team that he cares about. And when he stops caring about them, they disappear from his sight.

S1: Why did Mark Zuckerberg move the A.I. teams to be near him? What was it about A.I. that made him so excited about its ability to support Facebook’s growth? Back in 2012, Facebook had about a billion active users. By the end of last year, that had grown to roughly two point eight billion, but the company’s growth is slowing.

S3: That’s why he was so excited, because he thought A.I. would continue to drive Facebook’s growth, to one day have every single person in the world using Facebook.

S1: What Mark Zuckerberg understood even years ago was how A.I. could help supercharge Facebook’s growth. But what Karen understood, and what she showed in her story, was how that same A.I. amplified Facebook’s worst impulses and spread dangerous misinformation.

S2: So I think perhaps what resonated was me linking this conversation, the misinformation conversation, the hate speech conversation. And it’s happening at a time when the Capitol riots happened in January, and we are all acutely aware that we’ve sort of lost track of how Facebook is working on these issues. And I think people are hungry to know. Cambridge Analytica happened three years ago, and they said they would reform, and now the Capitol riots happened. What on earth happened in between?

S4: And today on the show, Karen explains how algorithms got Facebook addicted to spreading misinformation and how the people who were supposed to fix that problem didn’t or couldn’t. It’s a story that made Facebook quite unhappy with Karen. And we’ll talk about that part, too. I’m Lizzie O’Leary and you’re listening to What Next? TBD, a show about technology, power and how the future will be determined. Stay with us.

S1: One of the things that I find particularly interesting about you as a reporter, and the lens through which you’re looking at this, is that you cover A.I.; you’re not someone who covers Facebook as a company. So when did you first become interested in writing about how Facebook uses A.I.?

S2: In terms of the way that the A.I. reporting landscape works, a lot of the stuff that I cover comes out of big tech companies, because the big tech companies are the ones that have the money. They’re rich. They’re the ones that get to do the cool stuff. It’s expensive to do A.I. It’s expensive to hire people who can do A.I. So I thought at first that I would be writing about how Facebook A.I. had become so powerful and influential. There are lots of conversations happening right now about just the influence of tech giants on the creation of science, like the production of knowledge. Facebook has this huge lab. There’s not a lot of coverage on it. I’m not really sure how it relates to Facebook as a product. And it also just has its tentacles everywhere. It has collaborations with all of these universities. A little-known fact that people rarely talk about is that Facebook was the one that invented high-powered facial recognition. It has produced all of these things that are consequential to our society. It affects the culture of our research, the trajectory of our research. I was just thinking about all of these things and being like, I just want to understand how Facebook affects us through the lens of A.I.

S1: One way is through research, which is what one of Facebook’s A.I. teams does. They’ve done things like make models that can estimate body poses and translate music across instruments. But then there are two other groups. One is the applied A.I. lab, which takes that research and applies it to Facebook products.

S2: And then they have this third team, now called the Responsible A.I. team, that was started in twenty eighteen and is focused on the implications of Facebook’s A.I. on society. And I ended up, through the course of interviews, meeting with Joaquin Quiñonero Candela. He used to lead the applied A.I. lab, and then he started leading this Responsible A.I. team. What I loved was that Joaquin was really unique, in that he was literally the person who turned Facebook into an A.I. powerhouse and then started taking on this responsibility stuff.

S1: Tell me a little bit about Joaquin. What did he build that was so consequential for Facebook?

S2: Joaquin came into the company in twenty twelve, and at the time Facebook had very little A.I. happening. When he arrived at Facebook, he was one of the few people in industry at the time who was already applying A.I. to products.

S1: He’d made ad targeting models for Microsoft, and he did something similar for Facebook. His machine learning models could take ad click data and learn that, say, women between certain ages would click on ads for yoga pants, and then they could target them even more precisely. The better the targeting, the higher the chances of a click, and so on.
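
To make this concrete, here is a minimal, hypothetical sketch of the kind of click-prediction model being described: train on past ad-click data, then score a user against candidate ads and show the ad with the highest predicted click probability. The features, data, and numbers are invented for illustration; this is not Facebook’s actual system.

```python
# Hypothetical sketch of an ad click-prediction model; features and data are
# invented for illustration, not Facebook's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: columns are (age, interested_in_fitness, ad_is_yoga_pants);
# the label is whether the user clicked the ad.
X = rng.integers(0, 2, size=(1000, 3)).astype(float)
X[:, 0] = rng.integers(18, 65, size=1000)  # overwrite first column with ages
# Simulated ground truth: fitness-interested users click yoga-pants ads more often.
click_prob = 0.05 + 0.4 * X[:, 1] * X[:, 2]
y = rng.random(1000) < click_prob

model = LogisticRegression().fit(X, y)

# Targeting step: score one user against candidate ads and pick the ad with
# the highest predicted click probability.
user = [29.0, 1.0]            # age 29, interested in fitness
candidate_ads = [0.0, 1.0]    # 0 = generic ad, 1 = yoga-pants ad
scores = {name: model.predict_proba([user + [ad]])[0, 1]
          for name, ad in zip(["generic", "yoga pants"], candidate_ads)}
print("predicted click probabilities:", scores)
```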

S2: And then other people at the company who were working on News Feed were like, hey, the machine learning algorithms that you built to target users with ads, we could use that to target users with the posts that they like and the groups that they like and the pages that they like. Let’s try to do that here. So then he did it for News Feed, and that’s when Zuckerberg and the chief technology officer, Mike Schroepfer, were like, oh, hey, there’s something happening here. We should invest in this technology.

S1: All of this started around twenty thirteen. And the more Facebook invested, the better the algorithms got. In news feed, for example, the models could be trained to predict who would like or share particular pieces of content. Then in turn, users would be shown the kind of content they were more likely to engage with. The concept was wildly popular within the company. One thing that’s pretty well known about Facebook is its focus on growth, and that growth comes from engagement. How do those two things intersect both with each other and then with these machine learning models?

S2: You made a really important distinction in your question, which is that engagement and growth are actually two separate things, and people often conflate them. But Mark Zuckerberg’s first love was growth, and his second love was engagement, because engagement helps foster that growth. If they can very precisely figure out what each individual user is most likely to engage with, and then personalize and tailor their News Feed, their ads, their recommendations to their preferences, they’re just much more likely to comment on, like, and share that stuff. What’s interesting is this was happening before A.I. was introduced to the organization. They were already using different design tactics to try and amp up engagement. And when machine learning came in, they didn’t just have design tricks at their disposal anymore. With machine learning algorithms, you just feed in all of this data and say, here are all of our users, here are all the pages they liked, here are all the words that they’ve posted and images that they’ve liked and friends that they have. And now, you, machine learning algorithm, figure out what the patterns are, so that you can show this person the content that they will most engage with, or the ad that they will be most likely to click on, or the group that they will be most likely to join. And that’s how it sort of cranked up engagement more and more and more over the years.
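
The same idea applied to the feed can be sketched the same way: a model trained on past engagement data scores each candidate post for a given user, and the feed surfaces the highest-scoring posts first. This is a hypothetical illustration of the ranking loop being described, not Facebook’s actual system; the features and data are made up.

```python
# Hypothetical sketch of engagement-driven feed ranking; features and data are
# invented for illustration, not Facebook's actual ranking system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Toy history: columns are (user's affinity for the post's topic,
# the post's past engagement rate, an "outrage" score); the label is
# whether the user engaged (liked, commented, shared).
X = rng.random((2000, 3))
engage_prob = 0.1 + 0.4 * X[:, 0] + 0.4 * X[:, 2]
y = rng.random(2000) < engage_prob

model = GradientBoostingClassifier().fit(X, y)

# Ranking step: score candidate posts for one user and show the highest-scoring
# ones first. The objective is only predicted engagement; nothing here
# distinguishes benign posts from borderline ones.
candidate_posts = rng.random((10, 3))
scores = model.predict_proba(candidate_posts)[:, 1]
ranking = np.argsort(scores)[::-1]
for rank, idx in enumerate(ranking[:3], start=1):
    print(f"rank {rank}: features={np.round(candidate_posts[idx], 2)}, "
          f"predicted engagement={scores[idx]:.2f}")
```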

S1: But there is a problem with this equation. The more controversial something is on Facebook, the more people click on it. Simply put, outrage drives engagement. It’s something Mark Zuckerberg himself wrote about in a post in twenty eighteen.

S2: And he has this chart that very clearly shows that the closer a particular piece of content gets to being violating content, so content that Facebook does not allow on its platform, which includes misinformation but also hate speech, nudity, all of these other things, the more engagement it has. It’s not specific to Facebook. It’s more just that people like outrageous stuff. They will see the outrageous stuff, they will click on and share that outrageous stuff. So they admitted that there was this pretty stark, clear relationship: if something is misinformation, or if it’s getting close to being misinformation, people are going to just start liking and sharing it way more. At the same time, they still tell these teams to maximize engagement. So there’s this very weird, perverse incentive that happens, where you can serve the engagement monster.

S1: But you cannot, at the same time, tamp down and get rid of all misinformation. All of this seems like the kind of thing I would assume a team called Responsible A.I., Joaquin’s team, would be tackling.

S2: I think so, too, because the way that the Responsible A.I. team was described to me early on in my interview process was: this is the team that we created to be a centralized hub of expertise within Facebook for understanding and studying the implications of our algorithms and mitigating any harmful unintended consequences that come out of those algorithms. To me, the way the algorithms amplify misinformation, increase polarization, and amplify violent speech and hate speech, that is a bullet point under that bigger mandate they told me the Responsible A.I. team was going after. But that’s not what they were going after.

S1: What were they going after?

S2: They started specifically focusing on one aspect of algorithmic harm, which is the discriminatory impacts that algorithms can have. The industry is well aware that A.I. can be biased.

S1: Visual recognition algorithms, for example, are less accurate on darker skin. But there was another kind of potential bias that Karen says worried executives at Facebook: supposed bias against conservatives. It was something that concerned Joel Kaplan, the head of policy at Facebook and the company’s highest-ranking conservative. And it was something that former President Trump fixated on, despite studies that have found no evidence of anti-conservative bias at Facebook. In twenty eighteen, the former president latched on to the hashtag Stop the Bias.

S2: The week after Trump tweeted the hashtag Stop the Bias, Mark Zuckerberg asked Joaquin to meet with him for the first time since the Responsible A.I. team had formed. And he told Joaquin in that meeting that he needed to understand everything about bias and he needed to understand how to quash it in Facebook’s content moderation algorithms. Facebook denies that these two things are related, so it’s possible that it’s a pure coincidence that this meeting happened to be called at the exact moment when all of this noise about Facebook’s supposed anti-conservative bias was rising. But I think, in the context of all of the reporting that’s happened about how Facebook has really pandered to conservative interests over the last four years while Trump has been in power, it’s very hard for me to believe that that was a coincidence.

S1: Do you think the possibility of using AI to crack down on misinformation was precluded because of this focus on bias?

S2: It wasn’t necessarily precluded, but there are interesting dynamics that are created when you pursue responsible A.I. in the service of growth. Before the Responsible A.I. team’s work even started in earnest, there were other parts of the company that were already invoking this idea of fairness to undermine misinformation work, polarization work, hate speech work, election integrity work. The policy team, run by Joel Kaplan, would kind of insert themselves into the work that these researchers were doing around trying to tamp down on misinformation, and then tell them that they needed to change their misinformation-catching algorithms, because they affected conservatives more than liberals. And this one researcher that I spoke to said the changes that they asked for made these models totally meaningless.

S1: Misinformation continued to flourish on Facebook, especially after the election. More than one hundred thousand users posted hashtags claiming election fraud, including Stop the Steal and Fight for Trump. Several groups used Facebook to promote trips to Washington on January 6th. You spoke with Joaquin after the Capitol riots. Did he think Facebook played a role in what happened?

S3: When I asked Joaquin what role Facebook had in the U.S. Capitol riots, he said he didn’t know. And I asked him, do you think that the Responsible A.I. team now has this obligation to be thinking about the way that its recommendation systems amplify misinformation, given that the Capitol riots have happened?

S2: And there was a lot of hemming and hawing of, well, it’s not really our job, is it really A.I. that’s causing that, maybe it’s someone else, maybe it’s another team’s job, but maybe we’ll work with them, maybe that’s in our future. And I asked him, do you honestly believe that A.I. is not part of the reason why this problem has gotten so bad? And he was like, I don’t know.

S5: Clearly, that is the company line. And it’s hard for me to say whether he was saying that simply because that’s the company line or if he actually maybe believes it.

S6: When we come back, Karen’s reporting makes some waves.

S1: When she was done with the reporting, but before publishing her story, Karen did what any good journalist does: she sent Facebook an email detailing the allegations in her piece and giving them a chance to respond. Karen says Facebook took issue with the story but offered no factual arguments. And so she published. Then various people who work at Facebook started criticizing her story on Twitter, saying she hadn’t reached out to certain teams when she had, or claiming that their A.I. models catch more misinformation than they really do. Even executives at the top of the company weighed in.

S3: The CTO of Facebook responded to my story, which I would consider an official company response, saying, I fear that this story is now going to dissuade people from working on A.I. He has this talking point that he always does. He said it to me in an interview. He said it to our editor in chief at an event when we hosted him.

S2: I’ve learned the hard way that we need to think more responsibly about our technologies before they’re ever deployed into the world, and that’s what the Responsible A.I. team was created to do, and that’s why we’re doing such amazing work on A.I. bias. And someone, a reader, tweeted yesterday: how weird is it that your piece said that Facebook is using bias sort of as a fig leaf to cover up the fact that they’re not doing any other responsible A.I. stuff, and then in response to that, the CTO says, hey, look, we’re doing A.I. bias work?

S7: I was like, yeah, that really, really puzzles me. I’m like, well, of all the things that you could have said, that’s a real zinger.

S1: Your editor described them as sort of following a playbook to try to discredit your reporting. And I wonder what that says about Facebook’s focus on its own image as a company.

S3: Facebook has this interesting thing that it does, especially when Mark Zuckerberg goes to testify in Congress: it will create this narrative of, you don’t actually understand the technology; we understand the technology.

S2: We’re the ones that built it, and therefore, we’re the only ones that can fix the problem. And I think what happened in my case is they couldn’t make that argument. I work at MIT Technology Review. I graduated from MIT. I have an engineering degree. I’ve been covering A.I. for two and a half years, and specifically covering A.I. bias for two and a half years. I have a very, very good understanding of the technology, and you gave me all this access to actually examine your technology and what is being done within the company.

S1: And so Karen said Facebook tried a different tack.

S2: They said, maybe you don’t understand the company. You only talked to the Responsible A.I. team. You didn’t talk to the integrity team, which works on misinformation, so you don’t have the full picture. First of all, I talked with people on the integrity team, just not through Facebook’s front doors. I have sources who worked on or with the integrity teams who described to me very clearly how the integrity teams work. I also asked Facebook multiple times, three times, to describe to me what the integrity teams do, and whether or not there’s any team, let alone the integrity team, that works in a centralized, prioritized, coordinated way on the study of how recommendation systems amplify misinformation. And they turned up empty. So these kinds of tactics that Facebook tries to engage in, where it’s always trying to say, you don’t actually know enough, we know better because we’re the ones on the inside with the full picture, to me, it just suggests that they do that purposely to confuse people and delay any kind of external regulation.

S1: Do you think Facebook can ever really do responsible, applied A.I., or will its business model always get in the way?

S3: Someone said an interesting idea to me today, which is like, what if we just capped Facebook’s growth, like it was only allowed to have X number of users, because then the incentive to grow is sort of already capped. But then you bring up this interesting point of, well, you could have a capped number of users and still try to, like, milk all the money out of them.

S2: My sense, at least from the way that people talk to me about the kind of person Mark Zuckerberg is, and from many other reports that have been written about him personally, is that he actually cares a lot more about growth than about money.

S5: Perhaps there’s a world in which he won’t go chasing for more and more money out of this limited number of users. But it’s really hard to say, and I think you’re right that both the growth and the business incentives point in the same direction of constantly wanting users to be increasingly hooked to their platform.

S8: Karen Hao, thank you very much. Thank you so much, Lizzie. Karen Hao is a senior reporter at MIT Tech Review. All right. That is it for us today. TBD is produced by Ethan Brooks and edited by Allison Benedikt and Torie Bosch. Alicia Montgomery is the executive producer for Slate podcasts. TBD is part of the larger What Next family. And it’s also part of Future Tense, a partnership of Slate, Arizona State University, and New America. And I want to recommend you go back and listen to Tuesday’s What Next? It’s part of our series about a year of COVID. It’s a deeply personal story of loss from our boss, Alicia Montgomery, and her family. And, you know, it made me cry. OK, Mary Harris will be back on Monday. Have a good weekend. I’m Lizzie O’Leary. Thanks for listening.