The Philosopher With Silicon Valley’s Ear

Lizzie O’Leary: So I would love it if you guys could introduce yourselves. Will, I’ll start with you. Who are you? Why are you here?

Will MacAskill: Sure. I’m Will MacAskill. I’m an associate professor of philosophy at Oxford University. I helped to set up what’s known as the effective altruism community, a community of people who try to do as much good as they can with at least a significant part of their time and their money. I also recently wrote a book called What We Owe the Future. That’s, I think, going to be the topic of conversation in the course of this podcast.

Lizzie O’Leary: Bob, who are you?

Robert Wright: I’m Robert Wright. I’m mainly a journalist. I’ve written some books, including a book called Nonzero, which may be the one most relevant to this discussion. I also publish a newsletter called Nonzero and have a podcast called Nonzero. And in fact, I’ve written a little about Will’s ideas in the Nonzero newsletter.

Lizzie O’Leary: I asked Bob and Will to come on the show because Will’s new book, What We Owe the Future, is something you’re likely to see on bookshelves in Silicon Valley. The book is a case that we, the people of today, need to make the people of tomorrow, and many, many tomorrows from now, one of our top priorities. The idea is already beginning to reshape the world of philanthropy. Will’s philosophy, known as longtermism, is hot among tech types, as is effective altruism, the movement he helped found. Elon Musk tweeted about the book: “This is a close match for my philosophy.” And Bob has written some very thoughtful critiques of longtermism.

Lizzie O’Leary: So today on the show, we’re going to do things a little differently. We’ll have a wide-ranging conversation about an increasingly influential worldview and where it sometimes falls short. This is a little longer than our usual episodes, which I guess is appropriate. I’m Lizzie O’Leary, and you’re listening to What Next: TBD, a show about technology, power, and how the future will be determined. Stick around.

Lizzie O’Leary: The elevator pitch for longtermism is contained in three sentences in Will’s book: Future people matter. There could be a lot of them. And we should do what we can to make their lives better. And I’ve noticed that as you have been talking about the book, you have, I don’t know, maybe scaled back your description of this from being the moral issue of our time to a moral issue of our time. I wonder if you could tell me how you approach those three sentences.

Will MacAskill: Sure. So longtermism is the view that we should be doing much more than we currently are to protect the lives of future generations. And you’re right that positively impacting the long term is a key moral priority. It’s something that we should think of as one of the key challenges that we face in the world today. And as you said, I got round to this idea by taking seriously the interests of future generations, taking seriously the sheer scale of the future. Perhaps we’ll cause our own extinction in this century or the next, but perhaps not.

Will MacAskill: And if not, then we might have hundreds of thousands of years ahead of us if we last as long as a typical mammal species; hundreds of millions of years while the earth is habitable; and potentially billions of years if we manage to extend our species’ lifespan beyond that point, too. And yet there are just things that will impact not just the present generation, but many generations to come. The risk of all-out nuclear war keeps me up at night. The risk of pandemics far worse than COVID-19.

Will MacAskill: Absolutely. My personal best guess is that there’s something like a one-in-three chance that we’ll see nuclear weapons used in a war situation in my lifetime. And that’s terrifying. Similarly, engineered viruses, weapons of mass destruction coming from biological weapons. Similarly, developments in AI. I think there are just major challenges that we face that are enormously worrying from a near-term perspective and a long-term perspective.

Lizzie O’Leary: Bob, the language that Will uses, of course, seems utterly morally laudable, and yet you have this crystallized counterview that you, in your newsletter, called short-termism. So give me the counter.

Robert Wright: Well, first, there’s a lot I like about longtermism. I accept the foundational premise that the lives of future people have as much moral value as my life, and I like the fact that it draws our attention to so-called existential risks. In other words, it reminds people that the problem with, say, a nuclear war that wipes out all of life on Earth would be not just that it was bad for us, but that it was bad for a bunch of yet-unborn people, so it’s even more imperative to try to avoid it now. Technically, as Will himself has said, most of the things called existential risks would have a very low probability of wiping out all humans. But that aside, I certainly think not nearly enough attention is paid to all kinds of problems of great magnitude that could have very, very severe consequences.

Robert Wright: The issue I have with longtermism is whether it is, for most people, the most effective kind of rhetorical pitch for drawing their attention to the things that I’d like to see their attention drawn to. My own preference is to emphasize something that I call short-termism. But I would emphasize, I don’t mean by short-termism, you know, succumbing to the temptation to buy the donuts today.

Robert Wright: I have a very expansive definition of short-termism. I mean people who are concerned about themselves, their children, their nieces, their nephews. I would call that short-termism because I think that’s what’s kind of natural. That’s kind of the default human concern. You don’t have to work very hard to get people to care about how life is going to be for their children.

Robert Wright: So I think a lot of people naturally have a horizon of moral concern that extends for several decades. And it seems to me that we have failed to tap into even that, right? We have failed to get people to focus on various things that should be of great concern to them if they care about themselves and their children.

Robert Wright: And it just seems to me, if we can’t do that, hey, what are the chances that we can sell them on the importance of these problems by talking about, you know, some guy who’ll jetpack to work in 150 years and how tragic it would be if he wound up not existing? Secondly, I worry that focusing on the guy with the jetpack will make this seem so conjectural, so hypothetical, that it’s easy to dismiss, and will make the whole enterprise seem almost too sci-fi-ish.

Lizzie O’Leary: Before longtermism, Will was one of the originators of the effective altruism movement, or EA. Its backers say its central tenet is to find the best ways to help other people and then put them into practice. That means quantifying and maximizing the reach of a financial donation. The classic early example of the movement is buying mosquito nets: a low-dollar investment for the donor with the high payoff of potentially preventing malaria for the recipient.

Lizzie O’Leary: The movement also encouraged people to “earn to give,” embracing the high salaries of Wall Street or corporate America so that they had more money to distribute. Longtermism might be thought of as EA’s second cousin: make a small investment now in thinking about future humans and the world they’ll inherit, with the chance of a big payoff for them. But doing this necessarily requires a degree of abstraction from the present day, and it means making some tradeoffs.

Lizzie O’Leary: Will, I think there is something in that that is worth engaging with, right? The idea that this is maybe a bloodless framework. The most empathy-generating experience of my own existence is parenthood. And I wonder how you capture someone and make them care when we are constantly surrounded by evidence that people don’t care very much, can’t see past next week.

Will MacAskill: Well, I think two things. Where I’d strongly agree with Bob is that for many of the things I talk about, like these worst-case pandemics, like the risk of nuclear war, there is this very strong short-term argument, too. I mean, it’s just this astonishing fact that the U.S. government spent $5 trillion on responses to the COVID-19 pandemic and yet only $5 billion on preparing so that we don’t have another pandemic in the future, a thousand times less. It’s an outrage. In fact, a sane world, even a world that didn’t care at all about what happens past our grandchildren’s generation, even past our own generation, would be doing radically more to guard against some of these risks.

Will MacAskill: And I take it on board that what I’m doing is giving arguments, and giving arguments for something that’s hard, because with future people, the reason we’re not paying attention to their interests is precisely that it’s hard to empathize with people we will never meet. And I think that’s why the interests of future generations are so neglected. They can’t vote, they can’t lobby, they can’t tweet at us. And so what are the solutions? Well, one is just giving arguments. That appeals to me; that moved me.

Lizzie O’Leary: But you’re not just a philosopher. You’re also an activist.

Will MacAskill: Exactly. And actually, I think you’d be surprised by just how many people are at least mildly concerned and are willing to reflect on their own values and think: look, it just is inconsistent. In just the same way I should care about someone wherever they live on the earth, wherever they are in space, I should care about someone wherever they are in time. And if there are things that are going to impact future generations, that really is very important. And I think we’ve seen this in the many thousands of people now who are using some of their income to fund nonprofits that are positively impacting the long-term future, or switching their careers. I also think we saw it well before the rise of effective altruism with the environmentalist and climate movements, which were fundamentally premised on the idea that we want to ensure that future generations are no worse off than the present generation is.

Lizzie O’Leary: There is a framework in your book, Will, that I would love to hear you guys talk about: significance, persistence, contingency. The sort of lens through which you examine an action, an idea. I wonder if you could walk me through that and talk about how it applies practically.

Will MacAskill: Sure. So there are many challenges and problems that the world faces, and if we’re taking a long-term view, how should we think about which are the most important? Not necessarily the ones we should focus on, because you also need to think about how much other people are focusing on them. And I break that down into, as you say, significance, persistence, and contingency, where the significance of an event is how big a difference it makes at any one time to how good or bad the world is; persistence is how long that difference lasts for; and contingency is whether that event would have happened otherwise.

Will MacAskill: So take the loss of some species, for example. For significance, you can think about how much better the world was in virtue of having that particular beautiful creature. You can think about persistence: if we lose a species, do we get it back at some point? And I think the answer is no. I think the loss of a species is something that’s irrevocable; it actually has an impact that persists indefinitely.

Will MacAskill: And then the final question is contingency, where perhaps you managed to save a species from extinction, but it would have gone extinct just a few years later anyway, by forces outside your control, in which case the impact you’ve had only lasts for those few years.

Will MacAskill: And I think you can apply that framework more generally, where we want to be looking for those things that are significant, in the sense that they really make a big difference to the value of the world at any one time; persistent, in the sense that the change you’re making will last for an extremely long time; and contingent, in the sense that the change you’re making wouldn’t simply have happened otherwise just a few years later.

Lizzie O’Leary: That feels like an awful lot of steps to ask an average person to engage in.

Lizzie O’Leary: Bob, how do you think about that framework?

Robert Wright: I accept the framework in principle; I think it makes sense. But if we focus on the persistence part, I guess I’m asking the question of how important it is to convince people that the persistence of something goes beyond, say, 50 years. And let me give you a practical example. People are finally, at the moment, because of some unfortunate geopolitical realities, thinking more about the actual danger of nuclear war. In a sense, I guess the silver lining to the war is that it can get people focused on grim things that do deserve their attention. But here’s something that I think they’re still not thinking about at all that is relevant to nuclear war.

Robert Wright: Okay. The possibility of a space weapons race is something that gets basically zero attention. Nobody talks about the fact that China, Russia, and the U.S. have tested anti-satellite weapons and there’s no treaty in place to prevent that. And when you look at the implications of a world in which countries do have vast arsenals of anti-satellite weapons, you’re talking about a world where one nuclear power could cripple the outer-space surveillance capabilities of another great power, which would give it an itchier trigger finger, you know, on the nuclear button, because suddenly it would be blinded.

Robert Wright: And in fact, even the fact that an adversary has the capacity to blind your surveillance infrastructure in outer space could give you an itchier trigger finger, to say nothing of what the consequences would be of smashing up a bunch of satellites and having all this debris floating around destroying a bunch of other stuff in outer space that we depend on.

Robert Wright: Right. That’s getting no attention at the moment. And in principle, it should, because we’re talking about relatively near-term things. These weapons are being tested now; now is the time to stop the arms race. This is something that could be devastating for the planet in 20 or 30 or 40 years, when the people we’re trying to convince about the importance of this will probably still be alive, to say nothing of their children and grandchildren. As a matter of moral philosophy, the exact extent of the persistence of the damage does matter.

Robert Wright: But in terms of the rhetorical leverage we have, I don’t think it adds much, and it may be an unhelpful distraction in a certain sense. Job one is getting people to focus on the things that they naturally focus on, right? The welfare of them and their kids. If we can’t do that, we are in deep, deep trouble. And I just worry that longtermism may not help at that point.

Will MacAskill: Yeah, I’ve got a few responses. Again, I agree that a sane world would be doing radically more about these global catastrophic risks even if we just had a 20-year time horizon. But I am trying to get people to think about doing the right things, and also, fundamentally, for the right reasons. I worry because, if you look at the history of other activist movements, sometimes they focus on some particular issue and promote it, but not for the reasons they fundamentally believe.

Will MacAskill: So perhaps I say: OK, the case for being vegetarian is stronger if I focus on the environmentalist aspect rather than the animal welfare aspect. But perhaps what that means is people switch from eating beef to eating chicken. That is good from an environmentalist perspective, but extremely bad, in my view, from an animal welfare perspective, and it wouldn’t have happened if you’d got people to care about animal welfare, perhaps, as well as the environment in the first place. So similarly, if I’m trying to influence hard-working, morally motivated people, I really want them to have the correct kind of overall understanding of the world. In some cases, like reducing the risk of nuclear war, it’s a no-brainer: take any moral perspective you like, that’s going to be good. That’s not so in the case of some other things.

Will MacAskill: So take the development of artificial intelligence, and very advanced artificial intelligence. How fast should we go there? Well, if I was only focused on the very near term, perhaps the case is just to speed up as much as possible: we can have a world of abundance. Whereas if I’m taking a longer-term perspective, I get increasingly concerned about outcomes that might be lower probability, perhaps it’s only 5 or 10 percent, but would be absolutely catastrophic, and catastrophic over very long timescales. That might mean stable totalitarianism; that might mean loss of control to AI systems themselves. And so I feel a lot of weight, as someone promoting ideas, to try and convey what I see as the true underlying moral worldview, so that people can take that worldview and then figure out what best follows from it, rather than just focusing on some narrow target.

Lizzie O’Leary: Part of the appeal of longtermism is that it seems in many ways unarguable; it’s difficult to say we shouldn’t do what we can for future generations. But critics argue that the concept of “we” is squishy, and it’s built on a lot of assumptions about shared goals. If longtermism is embraced by the wrong people in power, for example, it could have devastating effects on future happiness and well-being, not to mention our well-being right now. Isn’t any imagining of a moral worldview, by its nature, arrogant? I don’t see either approach, longtermism or short-termism, thinking: all right, let’s reckon with the legacy of colonialism, or let’s reckon with the idea that what we think is important might not be to someone else. How can we empirically know what is important to the future of the world?

Will MacAskill: So yeah, I push back on the idea of any moral worldview as arrogant. In What We Owe the Future, I talk a lot about the abolition of slavery. I think it’s just one of the most important moral developments in all of human history. And in the book, I really argue that it wasn’t just a matter of economic factors changing over time; it really was due to cultural change that in significant part was moved by activists and campaigners. And it would seem a little absurd to me to say at that time: well, you’re arguing for the abolition of slavery, isn’t that kind of morally arrogant or something? It seems like the way we make moral progress is by people giving arguments or trying to expand the empathy that people feel. And so I think it’s very important to be humble about what we do and don’t know.

Will MacAskill: My second book, which is more of an academic book that will be read by a good four and a half people, was precisely on this topic: that you shouldn’t just subscribe to a single worldview and act on that. You should give significant weight to a variety of moral worldviews and take the best synthesis. But at the same time, we need to make moral progress, and that’s why having arguments, and the sort of conversation we’re having here, is the way to do that.

Robert Wright: I don’t think it’s arrogant to want to talk about the fate of people other than ourselves in kind of moral philosophical terms. I mean, for one thing, we can pretty much assume that they’d rather exist than not exist. And I think that’s one of the key premises of Will’s philosophy.

Robert Wright: I think Will has called attention to a couple of points I want to concede. So there are different audiences, and there may be people who can be persuaded and inspired by focusing on the long, long term. And I encourage Will to keep getting his message out to them. I worry that there aren’t very many, but maybe I’m wrong. The other thing is that there are different kinds of problems, and Will is right that in principle AI could be something that has consequences that are dire way, way out in the far term and not so much in the near term. I think that’s fair, if that’s what he said.

Will MacAskill: That actually wasn’t quite what I was meaning. But something that I think is really correct in what Bob is saying is that the position of longtermism, unfortunately it’s kind of in the name, can mislead people, where it’s like: oh, you’re making plans that only pan out over centuries or something. And I totally agree, it’s not that. It’s about short-term risks with long-term consequences. Very often it’s about thinking long term and acting now. In the case of AI, the question I’m asking is: should we bring advanced AI forward in time, like barrel off towards it?

Will MacAskill: Well, many of the benefits accrue in the short term, and the harms and risks are not just about the short term, but also the very long term. I’m thinking in particular of some very influential Silicon Valley views, where the benefits are like: hey, I get digital immortality, I could live forever. The risks are like: oh, we lose control to AI systems themselves, or they get used by authoritarian governments. Those are risks that would fall on me, too, but also on future generations. And I think that longer-term perspective might mean we become a little bit more cautious on technological development, which often has benefits for the present through innovation, but can have these more systemic risks.

Robert Wright: I would say one thing about that, which is that I take the point that he’s talking about a different kind of short-term/long-term distinction: something can be very tempting in the short term while its negative consequences are mainly in the long term. Within an individual life cycle, it’s like heroin or something: feels great for a while, bad long-term consequences. That’s certainly true.

Robert Wright: But one thing I would say about the AI stuff is, you know, a general issue with longtermism is how confident can we be about the implications of various things? And the AI scenarios I’m still having trouble wrapping my mind around. I’ve tried. I’ve heard the thought experiment where you tell the AI to make paperclips and you forget to say, don’t make too many, and it turns the entire universe into paperclips. I’m not worried about that one. Now, there may be a principle there worth paying attention to; I don’t want to ridicule it. But I do think that in some of these realms, the arguments are so hypothetical and conjectural that I’m not sure how far you get even with a long-term perspective.

Lizzie O’Leary: Effective altruism, longtermism, and by extension Will himself have become increasingly influential in Silicon Valley. Will has the ear of Elon Musk; in fact, his texts have shown up in the court fight between Musk and Twitter. And he’s close with the crypto billionaire Sam Bankman-Fried, who has put millions of dollars into the movement.

Lizzie O’Leary: First off, it always strikes me that it’s a lot of dudes. But it can often feel like a way to justify focusing on what one wants to focus on instead of some current pressing problem. You could make the argument: well, I’m worried about future runaway AI, rather than thinking about the current construction of AI and the significant racial discrimination that has already been built into AI systems right now, and the near-term implications that has. I wonder how you balance those approaches, Will, particularly when you see how popular your work is among people in Silicon Valley.

Will MacAskill: So yeah, I really think this is a misunderstanding, and I want to push against it, honestly, in the strongest terms. A couple of things. One is that, yes, there are people within the tech world who’ve been influenced by these ideas, not just by myself, many people. However, if there is a dominant Silicon Valley view, it’s normally accelerationism about technology, because they see, wow, all the great things that technology can bring. Whereas this is actually a counternarrative. It’s instead saying: no, technology can be very dangerous, too. And yes, there are enormous amounts of money to be made within the field of AI, but perhaps we should be slowing it down. Perhaps, in fact, we should be working on nonprofit projects around technical safety.

Will MacAskill: And I think it’s a striking thing that the people who are being convinced and sounding the alarm bell are people within the machine learning community, often academics, and people who didn’t have an antecedent interest in AI. Sam Bankman-Fried cut his teeth as an animal welfare activist, and he went into tech in order to fund what he thought were the most pressing problems in the world, which just needed more funding.

Will MacAskill: And then on the issue of AI itself: it really no longer feels like this distant sci-fi issue. The rate of progress in AI systems is just astounding at the moment. Even the boosters who were saying, oh, we’re going to make so much progress, are being surprised, and there are even explicit forecasts where people have been underestimating the scale of progress. At the moment, people are just not aware of how fast progress is, and they’re not aware of the risks that can pose. It’s entirely unregulated, and there are very limited social norms within the companies. Things are changing fast enough that even the ethical thinking about this has yet to catch up.

Will MacAskill: And so, to the people who say, oh, these are just kind of sci-fi issues for a future time: over and over and over again, what we’re seeing with AI is that someone will say, oh, AI systems will never show generality, and then we have the language model GPT-3. Or, they’ll never show the ability to engage in reasoning, and then you’ll see the language model LaMDA, which solves this. Or, they’ll never be able to do Winograd schemas, which test the use of pronouns referring to other objects, and again, leading AI systems kind of smashed through that, too.

Will MacAskill: And so the people who are really working on and worrying about risks from extremely powerful AI systems think those risks are coming in the next decade or two. So again, it’s near-term risks with long-term consequences. And, at least from their perspective, people are drastically underestimating the pace of change.

Robert Wright: Can I say something else about the pace of change? I agree, first of all, that anything that is an antidote to Silicon Valley boosterism about how glorious the future will inevitably be as technology evolves is welcome. And that’s good. And I worry about the pace of change, too. I largely worry about it in a slightly different sense, which is just that some of the more mundane applications of digital technology, like social media, are advancing so fast that they are disturbing human social life at a pace that we’re having trouble keeping up with.

Robert Wright: So, for example, the much-discussed political polarization, or tribalism, in America and the world: I agree with a lot of people who think that is to some extent a product of our failure to yet come to terms with the way social media algorithms bring out the worst in us and spread that around.

Robert Wright: But again, this is an example where, when I try to get people to focus on the problem, and how it intersects with the intrinsic problem of the human psychology of tribalism, the various cognitive biases that lead us to see things from a groupish perspective and warp our vision, it seems my best hope is to say: hey, wouldn’t you like to straighten America out right now, and not have us keep going down this slippery slope towards civil war or something worse?

Robert Wright: And the same with international wars. To the extent that anything can get rhetorical purchase, that’s what seems to work for me. Again, that doesn’t exclude Will going around and doing his thing, appealing to the people who resonate with that, and talking about the AI problem, even though I find the long-term scenarios conjectural. If we can get more people thinking about the problem of keeping the whole human project on track, that’s great.

Lizzie O’Leary: One thing I’d like to engage with both of you on, whether we’re thinking about future people or current people, is the idea of a good life. I’ve heard you talk about this, but how can we say what a good life is? I think about the Peter Singer example; I think he has, perhaps rightly, gotten a lot of crap from the disability community for his views on what a good life is. And I wonder if either of you could talk about how we balance wanting to promote the maximum benefits to humanity without straying into a questionable judgment of what good and productive and joyful is.

Will MacAskill: So, you know, there are various philosophical views on what constitutes a good life or well-being. I think in practice they actually don’t matter that much because the single best guide we have to what makes someone’s life go well is their carefully considered preferences about their own life.

Will MacAskill: So if I’m thinking, OK, we’ve got all of these different ways of benefiting people, either in the here and now or longer term, let’s say it’s different medical interventions, and I’m asking: how bad is it to suffer from malaria versus tuberculosis? How bad is being poor versus being depressed? These are super hard questions. I can’t answer them from an armchair. So instead, just go and survey people and ask them. The ideal would be that you could ask every single person all of these careful questions about what tradeoffs they would make. And often you can find surprising things.

Will MacAskill: And this has been one of the great lessons championed by the disability rights community, for example: able-bodied people sitting in an armchair pontificating about what other people’s well-being is like might overestimate how bad certain sorts of disabilities are. I think people also underestimate how bad depression is. But ultimately, I think we should just ask people. Some people might say: I’m deaf, and that’s actually not a hindrance to me at all; it’s just not making my life worse. Other people might say: I’m deaf and that makes my life significantly worse; I wish I were able to hear. And ultimately, I think both of those things can be true. It’s just that they’re true for different people.

Will MacAskill: Then when we start to think longer term here, if the project were to really carefully work out what people in the year 2300 would exactly want and try to give it to them, that would obviously be hopeless. So instead, the things we’re focusing on are, firstly, just very basic, and secondly, about providing more options for future people. So how about we have a world that isn’t a post-nuclear-Armageddon, kind of apocalyptic hellscape? It seems like a broad array of people will be on board with that.

Lizzie O’Leary: But isn’t that an easy example? It’s easy to use nuclear war as the terrifying one, right? Because no one’s going to say no.

Will MacAskill: It’s easy. And where Bob and I are in violent agreement is just that, on the current margin, given the way the world currently is, these are the things we should be focusing on. So look at what the people who have been inspired by longtermist ideas are funding. A huge amount is just pandemic prevention and pandemic preparedness. Again, it just feels like not a lot of people are going to be disagreeing with better masks, early detection of pandemics. There is focus on, you know, nuclear diplomacy, which is open to a very fair, valid criticism.

Will MacAskill: If you say, well, how much traction can we make there? That’s very hard. There’s also a lot of funding on this problem of technical AI safety, where the key issues are: How can you have a language model that’s not deceptive? How can you have a language model where you actually understand what’s going on under the hood? Again, you might question how much traction you can get there.

Will MacAskill: I’m really somewhat optimistic. But again, these are things that seem good in the short term and extremely good in the long term. I think they’re relatively uncontroversial in terms of the impacts on people’s well-being, in the sense that people, just in virtue of being human, would want them.

Will MacAskill: And yet we as a society are doing almost nothing on them. We as a society spent a thousand times less on pandemic prevention than we did on pandemic response. The rhetoric around nuclear weapons at the moment, as I understand it, in the U.S. government is: should we be building more? And so, yeah, given the state the world is in on many of these issues, there are just overpowering arguments for making change.

Lizzie O’Leary: Bob, where do you think this question of what kind of life we should be creating, either for people now or for the future, gets difficult? Where does it hit the rocks? Because I worry that it is full of implicit judgment.

Robert Wright: I think for the most part, Will is right, if I understand him, in saying, or suggesting, that that question doesn’t really arise much in a direct way with most of this. In other words, what Will and I are saying is: life is better than not-life.

Lizzie O’Leary: But I think it does arise, as I have watched longtermism out in the world. Perhaps not in Will’s book, but as I’ve seen it embraced by different people.

Robert Wright: Sure. And there is the question of, you know, what if the AI could turn us all into the equivalent of brains in a vat? In other words, it’s like the Matrix scenario: we’re all in pods, and they’re pumping these pleasant hallucinations into our brains. Is that a good life? That’s an important question, actually. And I think in a way that’s one of the less farfetched scenarios, if you want to look way, way out. So I think the question does occur. But for the most part it doesn’t.

Robert Wright: One more example, since I’m talking about pandemics: it is amazing to me that for all the discussion about the possibility that this was a genetically engineered virus that was accidentally released, people are saying almost nothing about the threat of intentionally engineered bioweapons, which this definitely wasn’t. But obviously that’s a threat, and it’s really a fairly near-term, potentially catastrophic threat, and no one is talking about it. And if you look at what it would take to address that effectively, you have to imagine a very different world from the one we’re in.

Robert Wright: You know, not only none of these wars, but I don’t think we can even afford a Cold War if we’re serious about this. In my newsletter, I have repeatedly thrown out this phrase, “the apocalypse aversion project,” for what I see the Nonzero newsletter being about. So we have exactly the same concerns, and I think we agree on the moral philosophy, with some differences of opinion about strategy and about how confidently we can think about long-term scenarios in some realms.

Will MacAskill: And just one other thing I’d say, in terms of the distinction between my book and what the people who endorse longtermism go out and do in the world: I actually think my book is wackier than what longtermists actually do in the world. You know, I have these digressions into, like…

Lizzie O’Leary: Will, you talk about the train to Crazytown.

Will MacAskill: So, not even that. If you look at, say, the Future Fund, which is a philanthropic organization that in the first four months of this year moved about $440 million to generally longtermist causes, or at the longtermist side of Open Philanthropy’s grantmaking, it is almost quite boring, where it’s like, OK: broad-spectrum vaccines, metagenomic sequencing for early detection, perhaps better sterilization technology, better masks against pandemics.

Will MacAskill: Track 1.5 diplomacy between nuclear powers in the world: like, how can we get better relations between India and Pakistan? There is a lot on technical AI safety. And again, that’s hard to understand; I’d love for there to be more public communication, because it’s such a fast-moving technical field. But it’s like: look, we take these leading language models, can we get them to be non-deceptive? Can we actually just understand what language models are doing? That’s a big focus. Another focus is just better information, better arguments, and better data, which is something we’ve been concentrating on a lot.

Will MacAskill: So, can we actually get a reliable mechanism for aggregating expert understanding on these issues? Because many of the things we’re tackling are unfortunately just not things where we can get hard expert data. What is the chance of a third world war in our lifetime? You can’t do a randomized controlled trial on that. But what you can do is train up a body of people who are very skilled in forecasting, have them practice on short-term forecasts, and then use that to extrapolate out to longer-term forecasts.

Will MacAskill: And so a different line of criticism that Bob could be making is that I, the philosopher, get sidetracked sometimes by questions, you know, about population ethics or the value of the future and so on, whereas a very reasonable criticism is: look, we should just be focusing more on these concrete things that we can be doing that very clearly make the world better in both the short and the long term.

Lizzie O’Leary: For what it’s worth, short-termism seems to wrestle with some of the most pressing criticisms of longtermism, but it also runs the risk of being myopic. Bob, one question I have for you: let’s say we’re taking this sort of concentric-circle view, that I’m thinking about myself and my child and my family and the people I love. That also has the potential for tremendous selfishness, or a selfishness that could even verge on Objectivism, right? That you are so bound up in what is around you that you can’t go beyond it.

Robert Wright: Well, especially if you stay in the present tense, you know, it can lead to selfishness. I mean, today, tomorrow, the allocation of resources between my family and another family. But once you start looking very long term at all and address issues that could make life horrible for kind of everyone on the planet, then the selfish focus on your child is much less consequential, because you’re talking about a fate they are going to share with everyone else. And I think most of the problems that we’re talking about have that property. What I’m saying is, in theory, you should be able to get people to just look at the inner part of that concentric circle, yeah, you, your family, your kids, and do a better job of convincing them. If we can’t get them to be rational in that context, where selfishness should motivate rationality in principle, oh man, I’m going to get really pessimistic.

Lizzie O’Leary: Well, but I think rich white people, or people in the Global North, can look at that and say: well, my child, or my great-grandchild, might not experience apocalyptic climate change in the way that someone in the Global South will. I mean, that’s where I wonder how the conversion can happen.

Robert Wright: There are those differences, and they’re not just Global North/Global South differences in terms of the differential impact of climate change itself. So they’re complicated. But I doubt that the people who are not paying as much attention as we’d like to these things are very often under that particular illusion, that they are somehow magically saved from problems that will have a pretty diffuse negative impact, I think.

Lizzie O’Leary: Will, how do you want the average person to engage with this?

Will MacAskill: Yeah, I think in a variety of ways. The first is just to really, really reflect. I think ultimately what I would love to see is a cultural change where we start to take really seriously, just as a matter of moral common sense, the interests of future generations, and take seriously the fact that we in the present are doing things that can easily have a negative impact, sometimes on the short term, too, but also on the longer term.

Will MacAskill: And then beyond that, I think that follows through into politics, into policy, because there is this fundamental issue that future people cannot represent themselves. So the only way we’re going to take care of the future is by doing it by proxy, by having an electorate who really cares. And then, if individuals are looking for action, the single thing that I think is most impactful that people can do is donate, which means you can support the very most effective organizations in the world. And so in 2009, I co-founded an organisation called Giving What We Can that encourages people to give at least 10% of their income.

Lizzie O’Leary: The criticism of that, as I know you know, is that it says to people: you can go work on Wall Street, you can go work for the fossil fuel industry, and as long as you give away a certain amount of your income, it’s fine.

Will MacAskill: Yes. So that idea is called earning to give: pursuing a higher-earning career so that you can give more. That’s not really what Giving What We Can is about; Giving What We Can is just this 10% pledge. But it is true that I’ve argued that for many people, still a minority, maybe it’s like 10% of people for whom this is the best path, one way of doing good is by setting up a company or taking a high-earning career in order that you can donate a significant part of your income. In fact, most of the people that I know who’ve done this are donating around 50%. In some cases, it’s as high as 99% of their income or wealth.

Lizzie O’Leary: Do you still believe that? I wonder, after sitting with this for more than a decade, if your views have changed at all.

Will MacAskill: Yeah. I mean, early on, when I was giving these arguments, I was very worried by what I called the burnout and corruption worries. Burnout is just: you try working in a high-earning career and you just hate it. Corruption is the idea that you lose your values over time; you no longer remain quite the altruist. And honestly, it’s just been surprising to me how little those considerations have come into play. I think that’s partly because we’ve built a community of people who really care, and the people who are earning to give stay in that community. But also, I think people just retain their moral commitment over time.

Will MacAskill: So I actually think the rate of burnout, or corruption, or loss of ideals, is considerably lower among people earning to give than it is among people working in the nonprofit world. Which is fair, to be honest: nonprofit work is often not great. Often it sucks. Often you’re doing extremely hard, emotionally draining work for very low pay. And I’ve met just so many people who get very burnt out by that, but not in the case of people earning to give.

Will MacAskill: So, you mentioned Sam Bankman-Fried. He’s planning to give away well over 90%, you know, over 99%, of his wealth. Very unusually, even while still running his company, he is already giving close to $200 million this year and is planning to scale that up even more. Other people I know who are earning to give are donating much more than half of their income.

Will MacAskill: So yeah, I’ve actually been pleasantly surprised by the level of commitment and follow-through.

Lizzie O’Leary: It can be hard, when looking at a world facing a pandemic, climate change, and the vague but present threat of nuclear weapons, to deeply engage with the idea of a distant future. I have a weird question, and maybe it’s an impudent question. After spending some time with your work and thinking about it, I sort of wonder why you wouldn’t just drop everything and focus entirely on climate change.

Will MacAskill: Oh, yeah, it’s an excellent question. And I think that climate change is one of the key moral priorities of our time.

Lizzie O’Leary: But doesn’t it kind of blow past everything else?

Will MacAskill: I think that’s not true. The point of the book What We Owe the Future, in significant part, is a kind of “yes, and” about climate change: climate change is this huge problem, and I think there are many problems that are at least as important. I think AI is one. I think the risk of nuclear war is another. I think engineered viruses, as Bob was talking about, are another. And in many cases, these are radically more neglected. About $300 billion a year is spent on mitigating climate change. How much is spent on AI safety? It’s maybe like $100 million or $200 million a year. So we’re talking about a thousandfold difference.

Will MacAskill: And one of the things that’s often misunderstood about effective altruism is that we’re always thinking: what’s the best thing to do on the margin, given that I as an individual, or even we as a group, are this tiny force in the world? How much can we change the global allocation of resources? And that generally means focusing on things that are comparatively more neglected, like the issue of the safe development of artificial intelligence, you know, AI safety or AI governance.

Lizzie O’Leary: Will, I’ve interrogated you a decent amount. So, Bob, I’m actually going to give you the last word on what you want people to take away from wrestling with these ideas and engaging with them in an everyday way.

Robert Wright: Well, I guess I’d add something to what I’ve already said about encouraging people to think in a kind of rationally selfish way. In other words, to think clearly about the welfare of them, their family, their kids, their nieces, their nephews, their grandchildren. Again, I think thinking rationally about that would go a long way. But I don’t mean to say that there’s no sense in which people need to get out of their selfish perspectives, because look at what I think it would take to reduce the amount of international strife and domestic strife to a point where we could start focusing on the problems we need to focus on. Some of these things cannot be dealt with in an environment of international conflict of the kind that we have right now.

Robert Wright: And if you ask what it would take to get serious about building a more peaceful and stable world, I think a very big one is to make people better at looking at things from the perspective of people other than themselves. I don’t mean that I want them to feel sorry for people or sympathize with them; that’s great if they want to do it. And I’m not saying justify bad behaviour. I’m not saying: now that we understand why a number of Russians supported the invasion of Ukraine…

Robert Wright: …well, go ahead and proceed with the invasion. I’m not saying that. But I am saying that understanding the perspectives that have led to destabilising and horrible things is a good way to prevent them from happening in the future. This is like a hobbyhorse of mine: I’m writing a book on cognitive empathy, which is what I’ve just described, just perspective-taking. And that is one form of getting out of your selfish, kind of narrow-minded, self-centered perspective that I think is absolutely critical.

Lizzie O’Leary: Robert Wright, Will MacAskill, thank you both very much for talking with me.

Will MacAskill: Thank you. Thanks for having us on.

Lizzie O’Leary: Will MacAskill is an associate professor at the Global Priorities Institute at Oxford University and the author of What We Owe the Future. Robert Wright is a journalist who writes the newsletter Nonzero, which focuses on averting the apocalypse. And that is it for our show today. What Next: TBD is produced by Evan Campbell. Our show is edited by Tori Bosch and Jonathan Fisher. Joanne Levine is the executive producer for What Next. Alicia Montgomery is vice president of audio for Slate. TBD is part of the larger What Next family, and we’re also part of Future Tense, a partnership of Slate, Arizona State University, and New America. And if you are a fan of this show, I have a request for you: become a Slate Plus member and get all your podcasts ad-free. Just head on over to slate.com/whatnextplus to sign up. We’ll be back next week with more episodes. I’m Lizzie O’Leary. Thanks for listening.