Who’s Afraid of A.I.?
Speaker A: Over the past few weeks, you might have noticed some very public hand-wringing about artificial intelligence, a lot of it from people who had made AI their life’s work.
Speaker B: The so-called godfather of AI is warning that an end to humanity is a real risk, and he’s just quit his job at Google to warn us.
Speaker B: We need to think really hard now about how we’re going to control something that’s more intelligent than us.
Speaker A: That’s Geoffrey Hinton, an AI pioneer who’s been on a bit of a media tour worrying about AI since he left Google.
Speaker A: But it’s not just him.
Speaker A: There was a public letter from Elon Musk and others calling for a pause in AI development.
Speaker A: An essay in Time from theorist Eliezer Yudkowsky saying generative AI, the kind of tech that includes ChatGPT, can harm humanity or even end it.
Speaker A: Suddenly, AI doomerism feels like it’s everywhere.
Speaker A: And as I’ve watched these warnings spread, there was one person I really wanted to talk to.
Speaker B: My name is Meredith Whittaker and I am the President of Signal.
Speaker A: Meredith is also the co-founder of the AI Now Institute at NYU and was one of the organizers of the 2018 Google walkout.
Speaker A: I wanted to know what she thought when she heard Geoffrey Hinton’s worries about AI.
Speaker B: One of my reactions was that it felt a bit disingenuous.
Speaker B: Not that I don’t believe that he has concerns and not that I don’t believe that our concerns in some ways overlap.
Speaker B: Right.
Speaker B: In some sense it’s positive to have people coming out and raising the alarm because there really are risks related to AI.
Speaker A: But Meredith sees those risks somewhat differently.
Speaker B: And the risks that I see related to AI are that only a handful of corporations in the world have the resources to create these large scale AI systems.
Speaker B: And corporations are driven by interests of profit and growth, not necessarily the public good.
Speaker B: And when you have these powerful tools in their hands, we should expect to see them serving those interests and not necessarily the public good.
Speaker B: And I think that the concerns that were raised by Geoff and others are, I would say, less substantiated by evidence and often looking at hypothetical future scenarios in which these statistical systems somehow become hyper-intelligent.
Speaker B: And I don’t see any evidence backing those claims.
Speaker B: And it’s not that I don’t believe people are sincere in these beliefs.
Speaker B: What I am concerned about is that the sort of look over here into the vast future is playing into the hands of the corporations we need to be worried about right now.
Speaker A: Today on the show, what’s the real threat of AI?
Speaker A: Could it really kill us all?
Speaker A: Or are the risks a bit closer to earth?
Speaker A: I’m Lizzie O’Leary, and you’re listening to What Next TBD, a show about technology, power, and how the future will be determined by humans or robots.
Speaker A: Stick around.
Speaker A: As the president of Signal, the encrypted messaging platform, Meredith spends a lot of time focused on privacy and the inner workings of the web.
Speaker A: When she worked at Google, she founded the company’s Open Research Group, and she’s also a founder of M-Lab, an open source platform that measures Internet performance.
Speaker A: This is a long way of saying that she thinks about the internet and machine learning a lot.
Speaker A: She doesn’t particularly like the term AI.
Speaker A: She prefers ML.
Speaker A: Machine learning.
Speaker B: We used to call AI machine learning because AI is basically a marketing term that glorifies these technologies.
Speaker B: And machine learning was like where the hype was focused at that moment.
Speaker A: That moment being 2013, 2014, when she saw what she calls the first round of AI hype at Google.
Speaker B: What I realized and what really alarmed me then and continues to alarm me, is that what we’re calling machine learning or artificial intelligence, is basically statistical systems that make predictions based on large amounts of data.
Speaker B: So in the case of the companies we’re talking about, we’re talking about data that was gathered through surveillance or some variant of the surveillance business model that is then used to train these systems that are then being claimed to be sort of intelligent or capable of making significant decisions that shape our lives and opportunities.
Speaker B: Even though this data is often very flimsy.
Speaker A: The data that feeds these systems is from all over the web, gathered by crawling millions of websites and it’s everything from news sites to hate speech.
Speaker B: And it is being sort of wrapped up into these machine learning models that are then being used in very sensitive ways with very little accountability, almost no testing, and backed by extremely exaggerated claims that are effectively marketing for the companies that stand to profit from them.
Speaker A: You also work with the AI Now Institute, and I think something that is important to note is a line from you, I guess, that says nothing about artificial intelligence is inevitable.
Speaker A: And I guess that feels like it runs counter to a lot of the prevailing current sentiment.
Speaker A: Where I opened up my inbox this morning and I had four different PR pitches that included the term AI.
Speaker A: I wonder if you could expand on that a little bit.
Speaker A: This idea that nothing is inevitable about this.
Speaker B: I think we have to recognize that the tech industry as it is, is very, very recent.
Speaker B: Most of us old millennials have some living memories from before the Internet was what it was, certainly before we had mobile phones that were part of our prosthetic brains, right?
Speaker B: So this has happened really, really quickly.
Speaker B: I don’t think we need to accept it as inevitable.
Speaker B: I think it’s very historically contingent.
Speaker B: And I think in general, part of the narrative of inevitability has been built through a kind of sleight of hand that for many years has conflated the products that are being created by these corporations, so email, blogging, search, with scientific progress.
Speaker B: And the message, implicitly or explicitly, has been: do not put your finger on the scales of progress.
Speaker B: Don’t regulate it, don’t question it, don’t voice concerns about it.
Speaker B: Because if you do that, you’re going to be sort of messing with kind of the arc of science, the evolution of human knowledge, whatever the kind of grandiose framing is.
Speaker B: Instead, let the technologists do the technology, you do what you do and we’ll all benefit from progress being made by these companies.
Speaker B: And I think for a long time that staved off regulation, that intimidated people who didn’t have computer science degrees because they didn’t want to look stupid.
Speaker B: And frankly, that led us in a large part to where we are.
Speaker B: Where we are in a world where private corporations who in some ways have more power than states have huge dossiers, unfathomably complex and detailed dossiers about billions and billions of people and increasingly provide the infrastructures for our social and economic institutions, whether that be providing so called AI models that are outsourcing decision making or whether that be providing cloud support that is ultimately placing incredibly sensitive information again in the hands of a handful of corporations that are centralizing these functions with very little transparency and almost no accountability.
Speaker B: So I think that is not an inevitable situation.
Speaker B: That’s a situation we know who the actors are, we know where they live.
Speaker B: We have some sense of what interventions could be healthy for moving the situation we’re in towards something that is more supportive of the public good.
Speaker A: What do you make of this moment then?
Speaker A: Where, on the one hand, you have a number of consumer-facing generative AI products, ChatGPT, DALL-E, Stable Diffusion, et cetera, circulating among the general public, and at the same time, people who are, to some degree, eminent pioneers in working with neural nets or large language models saying, hey, maybe this stuff was a mistake.
Speaker A: Why are those two things happening at the same time?
Speaker B: So if you think about sort of past AI milestones, in quotes, we can think about something like AlphaGo, when DeepMind’s AI beat the complex game of Go and there were the headlines: changed the rules, AI is advancing, et cetera.
Speaker B: But there’s a real sense in which we kind of had to trust that, right?
Speaker B: An expert says beating Go is a significant step in AI.
Speaker B: We can kind of rationally understand that and we’re trusting that this really is a milestone moving forward.
Speaker B: ChatGPT, in a sense, gave us each sort of a window into the fact that, yes, these systems are astoundingly quick and pretty good at producing plausibly shaped responses to prompts, right?
Speaker B: And I think in part that fueled a larger public conversation where people were asking questions about this, where there was a lot of hype.
Speaker B: People were kind of able to, Rorschach-like, project all sorts of human and inhuman qualities onto these systems because they were sort of simulating an interaction that we’re all so deeply wired to respond to.
Speaker B: So I think in part that there is a real moment where if we didn’t want to anthropomorphize these systems, which, again, there’s no evidence they’re sentient.
Speaker B: There’s no evidence for a spark of consciousness, but it actually takes a bit of resistance to not anthropomorphize them because the interaction we are performing with them is one we are primarily familiar with having with real human beings.
Speaker B: The same responses in us are being triggered by this mega chatbot that are triggered by texting with a friend.
Speaker B: And so I think there’s something fairly deep to be examined about the human responses that are being played on by these systems, particularly ChatGPT, and then the way that provoked a kind of crescendo of speculation around future risks and concern that was largely centered on this ill-defined, anthropomorphized version of artificial intelligence that is, in my view, rooted more in that human reaction than it is in any evidence that the data and servers and human labor required to create these systems is ever going to itself become sentient.
Speaker B: Humans are obviously sentient when they contribute this labor, but insofar as those are all the ingredients that are required to create and maintain these systems, that combination in itself is not a sentient being that has been brought into the world by a superhuman Dr. Frankenstein.
Speaker A: Well, where is a better place then, do you think, to put that focus?
Speaker A: If it’s not on having these sort of Dr. Frankenstein feelings?
Speaker A: Is it thinking about the data that is going into large language models and the biases that can come along with that data?
Speaker A: You said you didn’t dismiss Hinton’s concerns out of hand, so I’m wondering where your concerns lie if they lie in a slightly different place.
Speaker B: There are many concerns we have to hold at once.
Speaker B: This isn’t a zero sum game.
Speaker B: And, of course, data bias and the fact that these systems will be shaped like the data they are informed by is a big one.
Speaker B: Right?
Speaker B: And Nitasha Tiku at The Washington Post did a really brilliant exposition looking at what actually goes into creating ChatGPT, right?
Speaker B: Where does it learn how to predict the next word in a sentence based on how many billions of sentences it’s been shown?
Speaker B: Where does that come from, right?
Speaker B: And it showed some gnarly things like neo N*** content is in there, deeply misogynist content and gnarly, and not surprising because we all know the Internet, right?
Speaker B: And that’s where this comes from.
Speaker B: That comes from sort of platforms and surveillance that has been enabled by the commercialization of the Internet.
Speaker B: Data is a big concern.
Speaker B: And again, going back to what we’re talking about with Measurement Lab and the subjectivity of data, who gets to author data, right?
Speaker B: Who gets to determine what it means?
Speaker B: And how is that shaping an implicit worldview that is then parroted back through these AI systems?
Speaker B: I think for me, there’s a big concern also about just who gets to use these systems, who benefits from them and who is harmed by them.
Speaker A: Because they require so much expensive computing power and so much data.
Speaker A: I mean, that sort of automatically says they can only exist in the hands of either very wealthy corporations or very wealthy individuals.
Speaker B: Yeah, absolutely.
Speaker B: And what is the business model for that?
Speaker B: Right, I want to point to something that’s giving me a lot of hope right now, which is the Writers Guild of America.
Speaker B: Strike and the Writers Guild of America are striking because their working conditions have been degraded pretty significantly over the last number of years.
Speaker B: And one of the demands that they are making on the studios is that they want the control to decide whether AI is used at all in their creative process and if so, how.
Speaker B: Now that is a really powerful form of what I would call kind of regulation from below that is just joining together in the classic labor organizing and saying we’re not going to work until we have working conditions that support us and sustain us.
Speaker B: And part of that is pushing back on the use of these technologies by studio heads and others who want to extract more from us while paying us less and saying, no, we’re not going to allow that technology to be used as a pretext for degrading our skill and our work.
Speaker A: Meredith argues that having generative AI out in the world now is less about you and me getting to do cool things with ChatGPT and more about big companies’ bottom lines.
Speaker B: It costs billions of dollars to create and maintain these systems head to tail.
Speaker B: And there isn’t a business model in simply making ChatGPT available for everyone equally, right?
Speaker B: ChatGPT is an advertisement for Microsoft.
Speaker B: It’s an advertisement that is telling folks like the studio heads, like the military, like others who might want to actually license this technology via Microsoft cloud services, hey, look, this works.
Speaker B: This can do interesting, neat things as long as you don’t care about the veracity of the content and hey, you should sign up for a license.
Speaker B: Right?
Speaker B: So we already know who’s going to be able to actually use this ultimately, who the business model will target.
Speaker B: And it’s not technology distributed democratically that the entrepreneurs and the people with a good grindset will be able to apply.
Speaker B: It is going to follow the current matrix of inequality in our world as it is shaped now.
Speaker A: When we come back, the Skynet question.
Speaker A: I want to talk about what I have seen as, I guess, maybe a bifurcation of the criticism of generative AI that seems to be popping up.
Speaker A: On the one hand, I feel like you have Hinton, Yudkowsky, et cetera, saying there is an existential threat here, and on the other, people like Timnit Gebru, Deb Raji, Joy Buolamwini, perhaps you, saying the issue here is as much in how these things are built and trained as anything else.
Speaker A: And yet I saw Geoff Hinton call your concerns less existential in a CNN interview, and I wonder what you make of that, because it seems like there are sort of these two different camps in thinking about how these models are disseminated into the wild and what kind of harms they might do.
Speaker B: Yeah, I think that is more or less correct.
Speaker B: And within those camps, of course, there are many small differences.
Speaker B: And people seriously wrestling with analysis and trying to really get our heads around something that is extremely complex, that needs to take account of the political economy.
Speaker B: Who’s controlling these and how are they likely to be used, the construction of the technologies themselves, what data do they use?
Speaker B: How are they trained?
Speaker B: What does that tell us about what their capabilities and harms will be?
Speaker B: And then a set that I would say is focused on theoretical, hypothetical, long-term risks, where the existential risk is often used to mean the risk of eliminating all of humanity.
Speaker A: The Skynet.
Speaker B: Yeah, the Skynet risk.
Speaker B: Right.
Speaker B: And even if I recall the movies correctly, there were still humans where Skynet was.
Speaker B: So is that existential?
Speaker B: I don’t know.
Speaker B: Right.
Speaker A: It was not great for those humans.
Speaker B: Yeah.
Speaker B: No, it looked pretty dark.
Speaker B: Nonetheless, what we mean by existential becomes kind of the crux of this argument.
Speaker B: And my concern with some of the arguments that are so-called existential, the most existential, is that they are implicitly arguing that we need to wait until the people who are most privileged now, who are not threatened currently, are in fact threatened before we consider a risk big enough to care about.
Speaker B: Right.
Speaker B: Because right now, low wage workers, people who are historically marginalized, black people, women, disabled people, et cetera, those people in countries that are on the cusp of climate catastrophe, many folks are at risk.
Speaker B: Right.
Speaker B: Their existence.
Speaker B: The term existential means something like concerned with existence.
Speaker B: Their existence is threatened or otherwise shaped and harmed by the deployment of these systems.
Speaker B: And we can look at these systems used in law enforcement.
Speaker B: There’s a New York Times story from a few months back about a man who was imprisoned based on a false facial recognition match, right?
Speaker B: That is deeply existential for that person’s life.
Speaker B: That person was black.
Speaker B: And we know that these systems, as people like Deb Raji, Joy Buolamwini, and Timnit Gebru have documented over and over again, are more likely to misrecognize black people.
Speaker B: And in a world where black people are more criminalized and there is inequality in law enforcement, that is going to have harms.
Speaker B: Right.
Speaker B: So my concern is that if we wait for an existential threat that also includes the most privileged person in the entire world, we are implicitly saying, maybe not out loud, but the structure of that argument is that the threats to people who are minoritized and harmed now don’t matter until they matter for that most privileged person in the world.
Speaker B: And I think that’s another way of kind of sitting on our hands while these harms play out.
Speaker B: That is my core concern with the focus on this sort of long term instead of the focus on the short term.
Speaker A: So then what is the next step?
Speaker A: Is it shut it all down?
Speaker B: The next steps are, in my view, things like the Writers Guild of America winning, showing that we can put clear guardrails on the use of these systems.
Speaker B: And those guardrails don’t have to come from entreating those who already have power.
Speaker B: They can actually come from power building in workplaces and in communities.
Speaker B: I think we also have some interesting proposals for more grounded regulation.
Speaker B: I would look at Lina Khan’s recent New York Times op-ed that calls for structural separation of these companies.
Speaker B: I would also look to the really grounded proposals that Amba Kak and Sarah Myers West at the AI Now Institute put out in their 2023 Landscape Report, particularly the proposal that looks at privacy legislation as something that could be beneficial in stopping some of the data-centric AI development.
Speaker B: Right.
Speaker B: Because of course, we have to get back to this core reality that AI is built on surveillance.
Speaker B: It is a product of the surveillance business model.
Speaker B: It is, in a sense, like one more way that these companies can make use of the data they created and collected in service of targeting ads, but now that can be in service of training AI models that expand their market reach and profitability.
Speaker B: So AI entrenches this surveillance business model.
Speaker A: I was going to ask you about that because I feel like this is a place where your day job as president of Signal kind of meshes with this concern.
Speaker B: Right.
Speaker A: We have started to see Reddit and others say, okay, if you’re going to use our API to train your large language model, you got to pay for it.
Speaker A: But then it makes me wonder, okay, well, what happens to the data of a regular citizen?
Speaker A: How do you know if your Reddit post, or whatever other piece of data, is being trained on here?
Speaker A: Are there ways to feel safer around that if you’re concerned?
Speaker B: The concerns that animate me at Signal and my concerns around AI, they’re inextricable.
Speaker B: Right.
Speaker B: And I took the job at Signal happily because I saw it as a way where I could protect this incredibly important core infrastructure for truly private communications in a world that is increasingly riddled with mass surveillance.
Speaker B: And that Signal was proving that we could do tech outside of this surveillance business model, that it’s actually a little safe haven from the indiscriminate collection and use of all of this data.
Speaker B: That does feed into the AI models, that does feed into creating models of reality that affect our lives in ways that we often don’t know and don’t have control over.
Speaker B: So in a world where AI continues at the pace we’re seeing with the voracious appetite for data, we are seeing privacy eroded.
Speaker B: We can think of a kind of NeverEnding Story metaphor, right?
Speaker B: And in a world where we see the model that Signal is showing possible, the model that Signal has created that is supporting millions and millions of people around the world to use technology in ways outside of the surveillance business model, we see privacy supported.
Speaker B: And my view is we see rights supported, and we see a world that has a lot more potential to create livable conditions that benefit all.
Speaker B: So these two things are very closely connected to me, and that’s why I’m here talking about them, because I don’t think we get to the world that I believe in and that I work for every day at Signal without naming what is happening in the dominant tech industry and how dangerous it is to some of these core values.
Speaker A: Where is your hope for reining that in, in a policy realm?
Speaker A: I mean, I know you have worked with Lina Khan.
Speaker A: You mentioned Lina Khan and the FTC.
Speaker A: Do you see that coming from her, from that agency?
Speaker A: Because I certainly don’t see Congress doing anything.
Speaker B: It is complicated.
Speaker B: I think I was heartened to see her op-ed, and I am also watching intently some of the things that are happening in Europe around the AI Act.
Speaker B: I guess when I talk about the policy realm, I think we can’t see legislation, regulation, policy making as disconnected from the rest of it.
Speaker B: Right?
Speaker B: We know that these companies spend hundreds of millions of dollars lobbying.
Speaker B: We know that they spend a lot of money supporting astroturf organizations that they can proxy their views through.
Speaker B: We know that in the US, in a post-Citizens United world, it is very hard to get elected without a huge amount of money.
Speaker B: And that money can be ultimately secretly contributed.
Speaker B: Right?
Speaker B: So we’re in an ecosystem where policy doesn’t just spring de novo from Zeus’s forehead.
Speaker A: And it’s not Athena.
Speaker B: Yeah, right.
Speaker B: It is not Athena.
Speaker B: It’s not made based on a kind of dispassionate examination of ideas.
Speaker B: Right?
Speaker B: There is a huge amount of influence that goes into shaping policy.
Speaker B: And there are folks like Lina, there are folks who are really taking this seriously, but that doesn’t mean there aren’t fierce counter pressures.
Speaker B: The point I’m making here is that I think we still need that fierce counter pressure.
Speaker B: We need people on the ground saying, no, we don’t want facial recognition in our community.
Speaker B: We need people lobbying for privacy.
Speaker B: We need the California privacy law to be kind of proven and to set the benchmark.
Speaker B: We need all of these things at once to make it ultimately more painful for those with the power to make regulation and policy to not check the unaccountable development and deployment of these technologies than it is to check them.
Speaker B: We have to recognize that we have a lot of competition in applying that pressure.
Speaker A: So how do you want someone like my mother, a smart person who doesn’t know that much about this stuff, to think about what feels like a sea of AI headlines around them these days?
Speaker B: Well, I think to know that they’re not alone in being overwhelmed. It’s really confusing.
Speaker B: There are so many claims in the headlines about what these things do and don’t do.
Speaker B: And I think if there’s one thing I would say, just keep an eye on who benefits and who is likely to be harmed, right?
Speaker B: When you see a headline about OpenAI, you need to always recognize that that’s talking about Microsoft.
Speaker B: When you see a headline that’s about AI, you need to remember that there are only a handful of entities in the world, corporations based in China or the US, that have the resources to make AI, and to remember that AI is not magic.
Speaker B: It is based on concentrated resources, concentrated computational power, concentrated data resources that are generated via surveillance, and the concentrated power of these companies.
Speaker B: Again, we know where they live, we know where their data centers are, and it is eminently possible to put these technologies in check if there’s a will.
Speaker B: So it’s not out of control, it is not out of our hands.
Speaker B: And you don’t have to be a computer scientist to be able to have an informed opinion about how these are used, who gets to use them, and to what end.
Speaker A: Meredith Whittaker, thank you very much for your time.
Speaker B: Thank you so much.
Speaker B: This has been wonderful.
Speaker A: Meredith Whittaker is the president of Signal.
Speaker A: And that is it for our show today.
Speaker A: What Next?
Speaker A: TBD is produced by Evan Campbell.
Speaker A: Our show is edited by Jonathan Fisher.
Speaker A: Alicia Montgomery is vice president of Audio for Slate.
Speaker A: TBD is part of the larger What Next family.
Speaker A: And TBD is also part of Future Tense, a partnership of Slate, Arizona State University and New America.
Speaker A: And if you’re a fan of the show, I have a request for you: become a Slate Plus member.
Speaker A: Just head on over to Slate.com/whatnextplus to sign up.
Speaker A: All right, we’ll be back on Sunday with another episode.
Speaker A: I’m Lizzie O’Leary.
Speaker A: Thanks for listening.