Telepathic helmets. Grid-computing swarms of cyborg insects, some for surveillance, some with lethal stingers. New cognitive-enhancement drugs. (What? Adderall and Provigil aren’t good enough for you?) Lethal autonomous robots. Brain-chip-to-weapon platform control systems on a “future force warrior” platform. American military technology is getting very frisky. And most of these technologies will eventually make their way back to civil society with impacts that will probably be more complex and more difficult than anyone can predict.
It’s tempting to call a stop to deployment of such emerging technologies. But that approach promises to be as successful as “Just Say No” was in the war on drugs, or as the original Luddites were at the dawn of the Industrial Revolution. The United States will develop these strange and alarming new technologies, but not, perhaps, for the reasons that many suspect.
The military is tasked with projecting American power around the globe—that, after all, is what the military of a dominant world power does. But Americans are increasingly allergic to casualties, meaning that this projection of power must cost very few, if any, American lives. It’s difficult to argue against the development and deployment of technologies that save American lives. And the American military is looking at very ugly demographics. The boomers are set to retire. The military will find itself competing with private firms hungry for talent and able to pay a lot more than enlisted salaries. The benefits that private firms can offer, such as not getting blown apart by an IED, make it difficult to compete.
This puts the military under the same kind of serious pressure for efficiency—mission accomplishment per unit soldier—that has long characterized the private sector. And the only real long-term solution is the same in the military as in private business: rapid substitution of capital for labor. Robots replace wet-ware soldiers. Cyborg insects replace infantry squads. In this regard, notice that the Predator being flown over Afghanistan by a soldier outside Las Vegas isn’t even good enough: It may reduce American casualties, but it still uses one body for one weapon platform. We can’t afford that, which is why lethal, autonomous robots are coming.
The other critical reason that these technologies will be developed is that we have run up against the limits of our brainpower. Consider, for example, the cluster of technologies that go by the moniker of “augmented cognition,” or augcog—such as optical systems that scan the battlefield, identify potential threats, and prioritize them before feeding them to the soldier. Or, if you prefer the civilian variant, consider proposals by GM, Ford, and others to build intelligent cars so that seniors can drive even as their onboard wet-ware cognitive systems (“brains”) fade. Both of these reflect, in different ways, a fundamental design truth: In increasingly complex systems with a high flow of critical information, such as a battlefield or a superhighway, it is the human bandwidth that’s the limiting factor. Put bluntly, we’re not conscious enough, or smart enough, for the conditions we have created for ourselves. Solution? Slide cognitive function onto coupled technology platforms. Search engines substitute for memory. Robotic systems (as on the U.S. Navy’s Aegis cruisers and destroyers) substitute for human judgment in time-compressed, information-dense decision theaters. The Cartesian human dies and is replaced by the spatially and temporally networked self—not because we want it to be that way but because it has to be that way.
Civilizations take advantage of technology or they decline. Where is the Islamic world, which clearly dominated science, mathematics, and technology about a thousand years ago? Where is China, which until 1800 had the biggest economy in the world and was certainly more technologically advanced than Europe? (Let the record note that China feels pretty passionate about this temporary drop in the league tables and has every intention of reversing it.) How successful have major world players been in stopping technologies that they, for one reason or another, detested? The Europeans have failed miserably in trying to restrain genetically modified agricultural technology, for example. Technological competence may not be sufficient for global hegemony, but it is necessary, and all the big players—the United States, China, India, Brazil, Russia, and other perhaps longer shots—know it.
We clearly have not developed the individual or institutional capability to understand, much less deal with, the larger implications of such technology systems. Nevertheless, the take-away shouldn’t be either fatalistic pessimism or undue technological optimism. A rational and ethical response is not denial—the Byron-esque pose on the cliff with the wind blowing your hair back, rejecting evil technologies through sheer force of will—but, rather, a difficult, complicated, messy, and constantly changing effort to manage technology systems and their implications. The American military, for one, has a robust review program that includes legal and ethical analysis for technologies as they impact its operations. Thinking about the consequent civil-society implications of those technologies is where we fall down. It is there that we must focus on building the institutional and political mechanisms we need to manage these technologies rationally and ethically.
This article is being published in conjunction with “Warring Futures: How Biotech and Robotics Are Transforming Today’s Military—and How That Will Change the Rest of Us,” a May 24 conference in Washington, D.C., sponsored by Slate, the New America Foundation, and Arizona State University. Read an article by Fred Kaplan about how the nature of war limits the use of technology and one by P.W. Singer about whether it’s dangerous to let drones fight for us.