Everyone knows the foreign threats our government deems urgent: cyberwar, a criminal North Korean regime, an aggressive Russian leader. (OK, not everyone in our government … ) But how does the military prepare for threats that will eventually arise but are not yet known? Peter W. Singer is a strategist and senior fellow at New America, and an expert on military technology and planning. He spends his time studying what threats America is likely to face and how the armed forces should prepare for them.
I recently spoke by phone with Singer, whose latest book (co-written with August Cole) is Ghost Fleet: A Novel of the Next World War. During the course of our conversation, which has been edited and condensed for clarity, we discussed the ethical considerations involved in pursuing new military technologies, the ways in which the military tries to stay ahead of the game, and why the Trump administration’s unwillingness to take global warming seriously is so dangerous.
Isaac Chotiner: What is the process by which the military thinks about future threats and prepares for them?
Peter W. Singer: Like it or not, everyone is a futurist in some way, shape, or form. For the military, that means wrestling with everything from how it envisions the future threat environment, to how it budgets for which weapons to buy, to how it trains individual soldiers as they go through basic training. When you’re thinking about this space of “wrestling with the future,” it really encompasses almost everything the military does in some way. Even military history programs are about going back and looking at the past, not for its own sake, but for lessons to mine for the future.
Big picture: Has the way the military thinks about the future changed over, say, the last 10 or 20 years?
There is a greater application of technology in this space. One example is the ability to run simulations at scale, which might apply to individual-level training or to looking at how certain weapon systems might fare in a future battle. They’re not just imagining it; they’re running a simulation. That’s probably one of the biggest shifts.
It’s funny because a lot of attempts at defining the future are really about wrestling with something that’s changing today. If you look at predictions of the world in 2030 or 2040, they’re often politically correct ways of talking about today. An example: After the Arab Spring, people suddenly started talking about the impact of a state breaking up from within and the power of social media, and they were reading that out into scenarios for the 2030s, and you’re like, “Hey, that actually happened last year.”
What’s changing now is the resurgence of the state and state-level threat, and so you can see how everything from the war games to the long-range trends went from talking about terrorism and counterinsurgency as their focus, to more and more on states and what we call hybrid warfare. Basically, what we saw in Ukraine.
I was about to say Russia, yeah.
Yeah, exactly. Russia, but Russia deploying not merely armor and artillery but little green men and cyberattacks. A big focus for the military, coming out of the work H. R. McMaster did on the Army side prior to his current job in the White House, is what we call multi-domain battle: the idea that what’s changing is actually a swing back to the past, where you’re not just fighting on land but simultaneously going at it in the air, at sea, and—the twist versus 70 years back—in space and cyberspace.
Is there some aspect of these preparations that you think the military is really good or really not-so-good at?
I think what they do really well is apply this massive scale of expertise, of investment, to wrestling with the future. There’s a huge amount of human talent within the military that they tap. And one of the things that works to our strength, compared to potential adversaries, is that they also pull in all sorts of expertise from outside the military. It ranges from historians to the Mad Scientist Program, which is literally a program that pulls in a mix of scientists and science fiction writers to help them envision the future. You don’t see that sort of thing happening in an Iran or a North Korea or a China. That vast scale, that vast architecture, the willingness to tap others, it’s all great.
Now, one of the challenges is that the scenarios they choose in a lot of these war games are challenging but comfortable. They’re almost always tough but somewhat expected scenarios. A couple of years back, the scenario was, “Ah, we’re going to have to mount an armored thrust to go seize the capital of an adversary state,” and they’re like, “Oh, you mean just like you did in Baghdad?” More recently it’s a counterinsurgency exercise and you’re like, “Oh, just like you’re doing in country X.” In contrast, the Air Force/Navy scenarios tend to be ones that match their service culture, so it’s often one that looks like an outright conflict in the Pacific. The challenge is that potential scenarios that are unexpected or difficult run counter to service culture, or have some kind of twist, and that’s where we don’t do so well. Those are tougher for the system to even authorize looking at. It’s the difference between an outright war in the Pacific versus, “Hey, you’ve been asked to manage a new version of the Cuban missile crisis. Keep X from happening, but don’t let it escalate into conflict.”
Is there some country that is thought of as being particularly good about planning for the future?
That’s a really tough question. In the military space, no. Everyone wrestles with this. Different countries have their approaches. The U.K., Israel, Singapore, these are all ones that have programs looking at it and they’re drawing on expertise, but no one’s really mastered this. Here’s a different way of putting it: You don’t know who got the future right until the history part happens. Let’s go back to a period that a lot of people, including myself, think is a parallel today in terms of a mass geopolitical but also technological shift, and that’s that period surrounding World War I where you have new science fiction–like technology emerging, but you also have shifts in global powers and the like.
Look at the 1920s and 1930s. There were different concepts of how best to use the tank, and similarly different concepts of how best to use the aircraft carrier. You really couldn’t know who got it right until the actual battle began. Should the tank be spread across the force, moving as slowly as the infantry, or should you gather it into a single armored punch, Blitzkrieg-style? How could it be integrated with the other new technologies, which back then were the airplane and the radio? Each of the sides thought they’d figured it out. When you ask who got it right, it’s hard to answer until the history part happens.
There are, however, lessons you can learn about who’s approaching it right. If you go back and look at the 1920s and 1930s, it’s about being aware of the new technologies and not being in denial about the potential disruption they cause. The arguments against the tank ranged from what it could and couldn’t do to questions of identity and culture within the military, from cavalry officers to those in leadership roles. Britain had a tough time dealing with the tank because of its old regimental system, the idea that officers took their identity from a particular kind of unit with a history going back centuries. That culture was more important to them than a new, more effective approach. The French had a tough time dealing with the tank because it fed into bigger debates about a professionalized military versus a draft force, and concerns about military coups, domestic politics, and all that.
If you go back and look at who won out in the past, it was the ones that were willing to look at the new technology, look at the different scenarios, mine lessons from the last wars about what had worked and what had not, and engage in major war games and training exercises where the winners and losers weren’t predetermined. For the United States it was, I think, the fleet problem exercises, in which, year after year, the U.S. Navy experimented with these new things called aircraft carriers and finally figured out, “Hey, they’re different than battleships and we’ve got to operate them differently.” Even then, it took Pearl Harbor to fully make that kind of change.
How good of a job do you think our military does of wrestling with the huge ethical implications of new military technologies and forms of warfare?
They are definitely wrestling with them. Actually, the challenge is the opposite of what people might expect, at least within the U.S. system, where too often they assume that some kind of legal or ethical concern will keep the technology from being used. You can see this in how we talk about lethal autonomous weapon systems, killer robots. It’s a little bit parallel to how people thought about submarines back in the day. It wasn’t just that it was hard for Navy leaders to envision that submarines could be useful against a dreadnought, the battleship that had ruled the waves; it was also hard for them to envision that submarines would be used against civilian shipping.
You can go back and find in the writings prior to World War I that both the British and the Germans, the very nations that would go on to use them that way, just couldn’t imagine that anyone would unleash the submarine in such a way. They thought it would be used the way naval warfare had been conducted for centuries: To go after civilian shipping, you reveal yourself, you search the ship, and only then do you destroy it if it carries contraband. Well, that didn’t work out with submarines. It wasn’t workable that way.
The same thing today: You will see people talk about certain weapon systems like robotics and say, “Ah, but we would never cross that line.” The reality is that we are. You can see a similar thing playing out with cyberattacks, where there were certain norms people thought would not be crossed, and we can already see them being crossed right now in scary ways. That should mean acknowledging, as we did about nuclear weapons, that a line was crossed, and asking, “OK, how do we figure out how to keep this from happening again? How do we set up deterrent structures? How do we set up norms around it?” Similar things need to happen around new technology, but again, when you participate in some of these programs they’ll go, “Oh, but no one would cross that line.” Actually they would, or they already have.
It’s also difficult for people to think in terms of future wars that might differ from today’s counterinsurgency fights. A good example is how they think about collateral damage and civilian casualties with robots, and having a man always in the loop. I remember being in one of these war games where they talked about how, in a future war, if there was going to be an airstrike, we would do it the way we do it right now in Afghanistan: You have a targeting officer in the field communicating back to the command center, there’s a JAG—a military lawyer—weighing in on the target, and we’ve monitored the target for long periods of time to confirm the information. Their vision was this highly deliberative process. That’s how we operate in Afghanistan right now. Imagine instead we’re in a great power conflict. You won’t just have one decision on whether to strike a target that can be talked out. You might have 100 of these decisions that need to happen simultaneously in a matter of seconds or nanoseconds. Do you think you’re going to have this deliberative process? Sorry, it’s not going to be that way, because then it would be a different kind of war.
How do you think the military is doing preparing for the consequences of climate change?
It is frankly doing much better than other parts of the U.S. government right now.
One, because it’s willing to acknowledge that it is happening, and two, because it’s wrestling with the implications for everything from the future threat environment, to the kinds of operations it might be called on to undertake, to what it means for U.S. military bases and infrastructure. We’ve seen this in Syria. A drought there pushed people into the cities, drove up food prices, caused disruption, and led to protests, which the regime then put down. That was the original spark. Was it the only cause of the war? Of course not. But was it a contributing factor? Yep. The military is looking at how climate change is driving this, or the movement of climate refugees, et cetera, and asking, “What does this mean for the operational environment?” It’s wrestling with that in its reports, but it’s also looking at things like, “OK, we have bases, huge amounts of real estate, in areas that are now more susceptible to everything from flooding to superstorms.” It’s actually acting quite maturely in recognizing the science compared to the Environmental Protection Agency, oddly enough.
Is there a fear that a military led by this president for eight years will change its posture on climate change?
There is a concern about it, yes, and you can see that in areas where the military wants to be thoughtful and deliberative about a certain issue, and then there’s a tweet that goes out from the commander in chief and a scramble to figure out everything from “what the heck does it mean?” to “hold it, do we have to implement this?” The fact that the Trump administration has been so slow to nominate people into leadership positions has been both a blessing and a curse.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.