Future Tense

Can Longtermism Save Us All?

Collage of the earth, an hourglass, and the book cover of What We Owe The Future.
Photo illustration by Natalie Matthews-Ramo/Slate. Photo by Trần Toàn on Unsplash and NASA.

Browsing the bookshelves in Silicon Valley, you’re likely to run across William MacAskill’s recent book, What We Owe the Future. In it, Will, an associate professor of philosophy at Oxford University, makes the case that we, the people of today, need to make the people of tomorrow—and many, many tomorrows from now—one of our top priorities. The idea, known as longtermism, is already beginning to reshape the world of philanthropy and is hot among tech types, as is effective altruism, the movement Will helped found. Elon Musk tweeted about Will’s book, “This is a close match for my philosophy.”

Robert Wright, a journalist, has critiqued Will’s ideas and the philosophy of longtermism in his newsletter, Nonzero. His philosophy of choice has a shorter horizon of moral concern, and he’s dubbed it short-termism. He argues that we should be utilizing the “default human concern” about ourselves, our children, and our grandchildren to create the kind of change that he hopes to see on some of the most pressing current issues.

On Sunday’s episode of What Next: TBD, Will, Bob, and I had a wide-ranging conversation about the increasingly influential worldview of longtermism and where it sometimes falls short. Our conversation has been edited and condensed for clarity. 

Lizzie O’Leary: The elevator pitch for longtermism is contained in three sentences in Will’s book: “Future people matter, there could be a lot of them, and we should do what we can to make their lives better.” Can you tell me how you approach those three sentences?

William MacAskill: So longtermism is the view that we should be doing much more than we currently are to protect the lives of future generations, and that positively impacting the long term is a key moral priority.

I came round to this idea by taking seriously the interests of future generations and the sheer scale of the future. Perhaps we’ll cause our own extinction in this century or the next. But perhaps not, and if not, then we might have hundreds of thousands of years ahead of us. There are things that will impact not just the present generation, but many generations to come.

The risk of all-out nuclear war keeps me up at night. My personal best guess is that there’s something like a 1 in 3 chance that I’ll see nuclear weapons used in a war situation in my lifetime, and that’s terrifying. The risk of pandemics far worse than COVID-19. Engineered viruses, weapons of mass destruction coming from biological weapons. Developments in A.I. I think there are just major challenges that we face that are enormously worrying from both a near-term perspective and a long-term perspective.

Lizzie O’Leary: Bob, the language that Will uses of course seems morally laudable, and yet you’ve crystallized a counterview that, in your newsletter, you call short-termism. Give me the counter.

Robert Wright: Well, first there’s a lot I like about longtermism. I accept the foundational premise that the lives of future people have as much moral value as my life, and I like the fact that it draws our attention to so-called existential risks.

The issue I have with longtermism is whether it is, for most people, the most effective rhetorical pitch for drawing their attention to the things that I’d like to see their attention drawn to. My own preference is to emphasize something that I call short-termism, but I have a very expansive definition of short-termism: people who are concerned about themselves, their children, their nieces, their nephews. I would call that short-termism because I think that’s what’s natural, that’s the default human concern. You don’t have to work very hard to get people to care about how life is going to be for their children.

I think a lot of people naturally have a horizon of moral concern that extends for several decades, and it seems to me that we have failed to tap into even that. We have failed to get people to focus on various things that should be of great concern to them, and if we can’t do that, what are the chances that we can sell them on the importance of these problems by talking about some guy who would jet-pack to work in 150 years? I worry that focusing on the guy with the jet pack will make this seem so conjectural and hypothetical that it’s easy to dismiss, and will make the whole enterprise seem too sci-fi-ish.

Before longtermism, Will was one of the originators of the effective altruism movement, or EA. EA’s backers say its central tenet is to find the best ways to help other people and then put them into practice. That means quantifying and maximizing the reach of a financial donation. Longtermism might be thought of as EA’s second cousin: make a small investment now in thinking about future humans and the world they’ll inherit, with the chance of a big payoff for them. But doing this necessarily requires a degree of abstraction from the present day, and it means making some tradeoffs.

Will, I think there is something in that that is worth engaging with. I wonder how you capture someone and make them care when we are constantly surrounded by evidence that people don’t care very much and can’t see past next week?

Will: Well, I think two things. So where I’d strongly agree with Bob is that for many of the things I talk about—these worst case pandemics, risk of nuclear war—there is this very strong short-term argument, too. And a sane world would be doing radically more to guard against some of these risks.

The reason we’re not paying attention to future people’s interests is that it’s hard to empathize with people we will never meet. They can’t vote, they can’t lobby, they can’t tweet at us, and so what are the solutions? Well, one is just giving arguments that appeal to me, that move me.

But you’re not just a philosopher, you’re also an activist.

Will: Exactly, and I think you’d be surprised by just how many people are morally concerned, and are willing to reflect on their own values to think, “In just the same way I should care about someone wherever they live on the earth, wherever they are in space, I should care about someone wherever they are in time.”

There is a framework in your book, Will, that I would love to hear you guys talk about: significance, persistence, and contingency. Can you walk me through that and talk about how it applies practically?

Will: So there are many challenges and problems that the world faces, and if we’re taking a long-term view, how should we choose which are the most important? Not necessarily the ones we should focus on, because you also need to think about how much other people are focusing on them. And I break that down into, as you say, significance, persistence, and contingency. The significance of an event is how big a difference it makes at any time to how good or bad the world is; persistence is how long it lasts for; and contingency is whether that event would’ve happened otherwise.

So take the loss of some species, for example. Well, you can think about how much better the world is in virtue of having that particular beautiful creature. You can think about persistence—if we lose a species, do we get that back at some point? And then the final question is contingency, where perhaps you manage to save a species from extinction, but it would have gone extinct just a few years later anyway, through forces outside your control.

That feels like an awful lot of steps to ask an average person to engage in. Bob, how do you think about that framework?

Bob: I accept the framework in principle, but if we focus on the persistence part, how important is it to convince people that the persistence of something goes beyond 50 years? As a matter of moral philosophy, the exact extent of the persistence of the damage does matter, but in terms of the rhetorical leverage we have, I don’t think it adds much, and it may be an unhelpful distraction in a certain sense. Job one is getting people to focus on the things that they naturally focus on: their own welfare and that of their kids. If we can’t do that, we are in deep, deep trouble, and I just worry that longtermism may not help at that point.

Effective altruism, longtermism, and, by extension, Will himself have become increasingly influential in Silicon Valley. Will has the ear of Elon Musk—in fact, his texts showed up in the court fight between Musk and Twitter—and he’s close with the crypto billionaire Samuel Bankman-Fried, who has put millions of dollars into the EA movement. Longtermism can often feel like a way to justify focusing on what one wants to focus on instead of some current pressing problem. You could make the argument, “Well, I’m worried about future runaway A.I.,” rather than thinking about the significant racial discrimination that has already been built into A.I. systems right now. How do you balance those approaches, particularly when you see how popular your work is, Will, among people in Silicon Valley?

Will: I think this is a misunderstanding, and I really want to push against it. The dominant Silicon Valley view, if there is one, is normally accelerationist about technology, because people see, wow, all the great things that technology can bring. Longtermism is actually a counternarrative. It’s instead saying, “no, technology can be very dangerous, too,” and “yes, there’s enormous amounts of money to be made within the field of A.I., but perhaps we should be slowing it down, perhaps in fact we should be working on non-profit projects around technical safety.”

The people who are really working on and worrying about risks from extremely powerful A.I. systems think those risks are coming in the next decade or two. It’s more that, at least from their perspective, people are drastically underestimating the pace of change.

Bob: I worry about the pace of change, too, but I largely worry about it in a slightly different sense, which is just that some of the more mundane applications of digital technology, like social media, are advancing so fast that they are disturbing human social life at a pace that we’re having trouble keeping up with.

So for example, take the much-discussed political polarization or tribalism in America and the world. I agree with a lot of people who think that is to some extent a product of our failure to yet come to terms with the way social media algorithms bring out the worst in us and spread that around. But when I try to get people to focus on the problem, and how it intersects with the intrinsic problem of the human psychology of tribalism, it seems my best hope is to say, “Hey, wouldn’t you like to straighten America out right now and not have us keep going down this slippery slope towards civil war, or something worse?” Again, that doesn’t exclude Will going around and doing his thing and appealing to the people who resonate with that, and then talking about the A.I. problem. If we can get more people thinking about the problem of keeping the whole human project on track, that’s great.

When we’re thinking about future people or current people, there’s the idea of a good life. How can we say what a good life is? For example, Peter Singer has, perhaps rightly, gotten a lot of crap from the disability community for his views on what a good life is, and I wonder if you could talk about how we balance wanting to promote the maximum benefits to humanity without straying into maybe a questionable judgment of what good and productive and joyful is.

Will: So there are various philosophical views on what constitutes a good life or well-being. I think in practice they actually don’t matter that much, because the single best guide we have to what makes someone’s life go well is their carefully considered preferences about their own life. This has been one of the great lessons championed by the disability rights community: Able-bodied people might overestimate how bad certain disabilities are. I think people also underestimate how bad depression is. But ultimately I think we should just ask people.

If the project were to really carefully work out exactly what people in the year 2300 would want and try to give that to them, that would obviously be hopeless. The things we’re focusing on are firstly just very basic, and secondly just providing more options for future people. So how about we have a world that isn’t a post-nuclear Armageddon apocalyptic hellscape? Seems like a broad array of people will be on board with that.

Bob, where do you think this question of what kind of life we should be creating is difficult? Where does it hit the rocks? Because I worry that it is full of implicit judgment.

Bob: I think for the most part Will is right in suggesting that that question doesn’t really arise much in a direct way with most of this. In other words, what Will and I are saying is that life is better than not life.

But I think it does arise, as I’ve watched longtermism out in the world. So perhaps not in Will’s book, but as I’ve seen it embraced by different people.

Bob: Sure. And there is the question of: If A.I. could turn us all into the equivalent of brains in vats, all of us in pods with pleasant hallucinations being pumped into our brains, is that a good life? That’s an important question, actually, and I think in a way that’s one of the less farfetched scenarios if you want to look way, way out.

Bob, let’s say we’re taking this concentric-circle view, that I’m thinking about myself and my child and my family and the people I love: that also has the potential for tremendous selfishness, or a selfishness that could even verge on Objectivism.

Bob: Well, especially if you stay in the present tense, it can lead to selfishness. But once you start looking very long term at all, and address issues that could make life horrible for everyone on the planet, then your selfish focus on your child is much less consequential because you’re talking about a fate they are going to share with everyone else.

Well, but I think rich white people, or people in the global north, can look at that and say, well, my child, or my great-grandchild, might not experience apocalyptic climate change in the way that someone in the global south would. That’s where I wonder how you do the conversion.

Bob: There are those differences, and they’re not just global north versus global south differences in terms of the differential impact of climate change itself. But I doubt that the people who are not paying as much attention as we’d like to these things are very often under that particular illusion that they are somehow magically saved from problems that will have pretty diffuse negative impacts.

Will, how do you want the average person to engage with this?

Will: In a variety of ways. The first is just to morally reflect. I think ultimately what I would love to see is a cultural change where we start to take really seriously, just as a matter of moral common sense, the interests of future generations and the fact that we in the present are doing things that will sometimes have crazily negative impacts on both the short term and the longer term. And that follows through into politics, into policy, because there is this fundamental issue that future people can’t represent themselves, so the only way we’re going to take care of the future is by doing it by proxy, by having an electorate who really cares too.

And then if individuals are looking for action, I think the single most impactful thing people can do is donate. And so in 2009 I co-founded an organization called Giving What We Can that encourages people to give at least 10 percent of their income.

The criticism of that, as I know you know, is that it says to people, “you can go work on Wall Street, you can go work for the fossil fuel industry. As long as you give away a certain amount of your income, it’s fine.”

Will: That idea is called earning to give: pursuing higher-earning careers so that you can give more. That’s not really what Giving What We Can is about. Giving What We Can is this 10 percent pledge. But it is true that I’ve argued that for many people, still a minority, one way of doing good is by setting up a company, or taking some other high-earning career, in order that you can donate a significant part of your income. Most of the people I know who’ve done this are normally donating 50 percent; in some cases it’s as high as 99 percent of their income or wealth.

Do you still believe that? I wonder, after sitting with this for more than a decade, if your views have changed at all?

Will: Early on when I was giving these arguments I was very worried by what I call the burnout and corruption worries. Burnout is that you’re working in a higher-earning career and you just hate it. Corruption is the idea that you just lose your values over time; you’re no longer quite the altruist you were. And honestly it’s just been surprising to me how little those considerations have come into play. I think that’s partly because we’ve built a community of people who really care, and the people who are earning to give stay in that community. But also I think people just retain their moral commitment over time.

After spending some time with your work and thinking about it, I wonder why you wouldn’t just drop everything and focus entirely on climate change?

Will: It’s an excellent question, and I think that climate change is one of the key moral priorities of our time.

But doesn’t it blow past everything else?

Will: I think there are many problems that are at least as important. I think A.I. is one, I think the risk of nuclear war is another, I think that engineered viruses are another—in many cases these are radically more neglected. So about $300 billion a year is spent on mitigating climate change. How much is spent on A.I. safety? It’s maybe $100 million a year, or $200 million a year.

Bob, I’m going to give you the last word. What do you want people to take away from wrestling with these ideas?

Bob: I’d add something to what I’ve already said about encouraging people to think in a rationally selfish way. In other words, to think clearly about their own welfare and that of their family, their kids, their grandchildren. But I don’t mean to say that there’s no sense in which people need to get out of their selfish perspectives. And if you ask what it would take to get serious about building a more peaceful and stable world, I think a very big part of it is making people better at looking at things from the perspective of people other than themselves. Understanding the perspectives that have led to destabilizing and horrible things is a good way to prevent them from happening in the future.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
