A writer and military historian responds to Justina Ireland’s “Collateral Damage.”
The histories of the military and technology often go hand in hand. Soldiers and military thinkers throughout the past have continually come up with new ways to fill the people over there full of holes as a means of encouraging them to stop trying to do the same in return. After the introduction of a new weapon or the improvement of an existing one, strategists spend their time working out the best way to deploy their forces to take advantage of the new tools, or to blunt their effectiveness by devising countermeasures.
The development of the Greek phalanx helped protect soldiers from cavalry, the deployment of English longbows helped stymie large formations of enemy soldiers, new construction methods changed the shape of fortifications, line infantry helped European formations take advantage of firearms, and anti-aircraft cannons helped protect against incoming enemy aircraft. The technological revolution of warfare has not stopped, and today, robotics on the battlefield—in the form of drones, automated turrets, and the remote-controlled FLIR PackBot—has made appearances in the most recent conflicts.
But while technology brings about new and effective means through which to kill one’s opponents, there remains a constant throughout history: the living, breathing humans on the battlefield who put those technologies to work. Properly utilized, such advances can give an army an edge. But it often takes soldiers and strategists time to fully understand the implications of those tools and the tactics and strategies needed to exploit them, and with that time comes casualties. Commanders in the First World War racked up unthinkable casualties by relying on outdated tactics, sending their men over the top of trenches into machine-gun fire before realizing that the effort was fruitless.
Justina Ireland’s short story “Collateral Damage” imagines one of those moments in which human nature runs headlong into inhuman efficiency. She follows a squad of soldiers assigned to test-run a new type of large military robot, TED, which quickly earns the ire of its human companions. Loaded with sensors and weapons, it’s extremely skillful, clearing buildings and hostile streets when the unit is eventually deployed to some unnamed conflict zone across the world. The soldiers are frustrated with TED’s reaction times, how it appears to be spying on them and logging their interactions, and how it might eventually put them out of a job, even while some of them recognize the benefit of its existence: They’d rather have it take a bullet than one of them.
Robots have taken part in warfare for longer than you might think. In Wired for War: The Robotics Revolution and Conflict in the 21st Century, P.W. Singer pointed to some early attempts during World War I: prototypes of remote control bombs on land, air, and sea, aided by the introduction of the radio on the battlefield. World War II brought new innovations like the Germans’ Goliath tracked mine (a remote-controlled, caterpillar-tracked bomb that soldiers steered toward their targets), or the Fritz X 1400, an aerial bomb deployed and controlled by German pilots.
The Cold War brought more robots as the U.S. military began to experiment with drone planes, and it was during the global war on terror that they began entering the battlefield in greater numbers, such as the iconic Predator drone or the larger Northrop Grumman RQ-4 Global Hawk, delivering bombs or conducting surveillance while being piloted from afar. The wars in Iraq and Afghanistan also prompted the deployment of remote-controlled robots, like the multipurpose PackBot (used for everything from bomb disposal to scouting to gunshot detection), onto the battlefield—something soldiers could expose to danger to look around without worrying about getting shot themselves. Unlike Ireland’s wary characters, some real-life soldiers have been distraught over the destruction of their squad’s device, even writing to iRobot (which has since spun off its military division into a separate company) to plead for it to be repaired.
As technology continues to improve, so too will the robots that set foot on the battlefield. Boston Dynamics has demonstrated incredible improvement over the past few years with its robots: Internet commentators might have mocked them stumbling about and getting beaten with hockey sticks in 2016, but more recent dance and gymnastics routines show just how far the technology has come. Already, we’ve seen instances of soldiers training with Boston Dynamics’ Spot and other combinations of drones and robots, as well as with autonomous armored vehicles. Those efforts are designed to refine tactics and strategies as soldiers work out the best use cases for these machines in combat—how they might effectively utilize their robotic companions to give them better eyes and ears on the battlefield.
As these robotic systems pop up in battlefields around the world, we’ll see friction points emerge as they begin taking and returning fire on their own. It’s often said that militaries train to fight the last war, and the prospect of going up against an enemy aided by artificial intelligence is a point of concern for modern-day military strategists—and not just because of the threat robots could conceivably pose.
A couple of years ago, I took part in an exercise at the U.S. Army War College’s Center for Strategic Leadership in Carlisle, Pennsylvania, where our group was tasked with devising the parameters of a war game in which one team would fight against an A.I.-assisted enemy. This wasn’t like The Terminator—it was more like imagining an enemy whose computers had access to reams of data and provided suggestions on the battlefield. Artificial intelligence is, well, inhuman, and it can lead to some creative solutions to problems, like unorthodox moves in the game Go or in sorting an Amazon warehouse. One concern was that such a system might come up with orders that seem counter to the mission’s objective or that might just be too out there to follow. Absent trust in the system, soldiers may simply not carry out those orders, because they can’t recognize or follow its line of thinking.
This isn’t a theoretical concern, either: A recent survey from the U.S. Air Force’s Journal of Indo-Pacific Affairs of 800 Australian officer cadets and midshipmen studying at the Australian Defence Force Academy found “that a significant majority would be unwilling to deploy alongside fully autonomous” lethal autonomous weapon systems.
Trust is essential in the adoption of new technologies: trust that one’s fellow soldiers will hold their shields together, that one’s arrows will fly true, that one’s radio signals will guide a bomb to its target accurately. The Air Force study specifically points out that trust in the safety and reliability of any robotic system is incredibly important: If an officer doesn’t trust the equipment, their subordinates won’t, either. “Without that trust,” the study’s authors write, “the unit is, quite understandably, likely to ignore, minimize, or leave behind that piece of equipment regardless of doctrinal guidance.”
Ireland’s soldiers don’t trust their new teammate: They don’t fully understand how it works, how it comes to the decisions that it makes, or what it does with the data. They’re fearful for their livelihoods in an increasingly uncertain world, and that combination of factors means that while TED can perform its task adequately on the battlefield, it’s ultimately a failure.
But if history is any guide, the soldiers who will fight alongside and against robots will ultimately evolve: They’ll learn how to trust the tools at their disposal, adapt, and, ultimately, survive to see another day.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.