The new film Chappie, directed by Neill Blomkamp of District 9 fame, has been widely panned by critics (though audiences are slightly more upbeat). Many reviewers have already cataloged the movie's failings, such as its avoidance of controversial political issues. As academic researchers working in these areas, we were somewhat frustrated by the film's portrayal of artificial intelligence and robotics, which is overly anthropomorphic and mostly implausible. But for all of its flaws, Chappie gets some things about robotics right. Among them: It aptly captures the moral complexity of modern research and development, particularly when militaries, police, and defense communities are the primary owners of the technology.
In Chappie, a robotic police force in South Africa is deployed in response to surging crime rates. But the real trouble starts when the lead robotics developer uploads “consciousness” to a discarded robot scout—without his company’s permission. (Apparently “.dat” is the preferred file format for consciousness.) The newly sentient scout disappears, leaving the robotics industry scrambling to get it back under control. A small gang captures the scout (and names it “Chappie”) in a scheme to turn it into a robot criminal that can make them money and save their skins from a murderous thug. Meanwhile, within the robotics company there is friction between those wanting to increase robot autonomy and those (personified by Hugh Jackman’s character) preferring to develop robotic exoskeletons that keep humans “in the loop.” In the internal machinations and drama of the company, called Tetra Vaal, we see many of the hallmarks of today’s real debates around the ethics of artificial intelligence and robotics, particularly when military and police forces are working closely with big technology companies.
Military technology no longer consists of just guns, ships, and satellites—it now comprises sophisticated, integrated hardware/software systems with nascent decision-making capabilities, like drones that automatically detect and destroy improvised explosive devices in Afghanistan. The military still relies on human decision-makers, especially in tactical scenarios, but some basic steps toward autonomy are already being developed.
A current hot topic of debate is the increased use of military technology by police departments. This concern, as embodied by the continuing conflict in Ferguson, Missouri, is so significant that the president created a new Law Enforcement Equipment Working Group to determine how state and local police can use military technology. We hope that a broad social influence on technologies like A.I. can help to create machines that are less threatening and not prone to human failings like racism, fear, and the adrenaline rush. But for that to happen, people and organizations outside the military need to play a role in developing the technology.
One of the particular challenges in creating autonomous technology is how and whether we teach it to make “good” (that is, ethical and moral) decisions. Chappie displays a wide range of humanlike perceptual and cognitive biases that are the specific products of human evolutionary heritage, rather than something intrinsic to intelligent beings in general. For example, Chappie “naturally” fears loud noises, displays “whole-object bias” while learning English, and adopts a humanlike interpretation of the term promise (even though the robot’s experience suggests that “promises” in the real world are often empty). None of these capabilities or inferences are necessarily implausible in a programmed system, but to think that they would all emerge spontaneously, along with broadly humanlike ethics, is to miss one of the most important things about artificial intelligence and robotics: They can be humanlike at times, but they can also be deeply inhuman in how they learn to represent and manipulate the world.
Consider a real-life A.I. example. Google DeepMind’s deep reinforcement learning system learns to play a wide range of Atari games given only the pixels on the screen, the list of buttons that can be pressed (with, at first, no knowledge of what they do), and the running score as inputs. Unlike Chappie, the DeepMind system does not spontaneously develop ethics as it learns from experience. On the contrary, it quickly learns to ruthlessly exploit the mechanics of each game in order to maximize long-term score, including glitches that humans were previously unaware of. This example belies Chappie’s optimistic presumption that robots will learn the best and not the worst of our human natures by default, rather than through careful foresight and design.
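To make that dynamic concrete, here is a minimal toy sketch (in no way DeepMind's actual system) of a tabular reinforcement learner facing an invented one-state "game": one action plays normally for a modest score, a second triggers a scoring glitch, and a third is harmless but unrewarded. All the action names and reward values are hypothetical.

```python
import random

# Hypothetical per-action scores: action 0 plays "normally," action 1
# exploits a scoring glitch, action 2 does something polite but unrewarded.
# A bare reward-maximizing learner has no concept of which is "right" --
# it simply converges on whatever pays best, glitch included.
REWARDS = {0: 1.0, 1: 10.0, 2: 0.0}

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    """Tabular Q-learning for a single-state game: q[a] is the estimated
    score for each action, updated toward observed rewards, with
    epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in REWARDS}
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(list(REWARDS))       # explore a random action
        else:
            a = max(q, key=q.get)               # exploit the best estimate
        q[a] += alpha * (REWARDS[a] - q[a])     # update toward observed score
    return q

q = train()
best = max(q, key=q.get)
print(best)  # the learner settles on the glitch action, 1
```

Nothing in the update rule mentions how the score was obtained; the glitch is just another path to reward, which is exactly why "learning from experience" alone does not produce ethics.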
Until recently, robotics/A.I. development was funded almost entirely by the government and performed by a few large defense contractors. Thanks to the improved accessibility and ubiquity of the relevant technologies and concepts, we may actually be seeing a move away from primarily military stakeholders in this sector. Google, for example, has acquired Skybox (a satellite imaging company) and Boston Dynamics (the advanced robotics company of “Big Dog” fame). Missy Cummings—a former U.S. Navy A-4 and F/A-18 pilot, the director of the Humans and Autonomy Laboratory at Duke University, and a member of the Stimson Center’s Task Force on U.S. Drone Policy—recently observed that “the barrier for entry to building a drone is incredibly low. … The line between what is military and what is civilian, what is toy and what is not toy, is very blurry.” (Her remarks came at a Future of War conference in Washington, D.C., sponsored by Arizona State University, where we both work, and New America; Future Tense is a partnership of Slate, New America, and ASU.) Sophisticated drone production now happens at universities and commercial labs, in backyards and garages.
The United States’ clear military advantage disappears when development is distributed, but the growing diversity in building the technology is positive in the long term. Decentralizing tech development leads to innovation, surprises, additional utility, more adaptability—improvements that may lead to better battlefield decisions as well. A democratic debate with a wide range of participants can help address slippery questions about values and how to teach ethics and morals to emergent intelligence—discussions that are difficult to hold in a single-stakeholder environment.
Researchers around the world are actively working on the philosophical, technical, and political questions involved in building moral machines: What values are sufficiently specific and computationally tractable to be implemented in computer code? How ought they inform machine decision-making? Are those the same values that should inform human decisions? And how, politically speaking, should society govern the decision-making processes by which increasingly intelligent and powerful systems are given goals? Should people be able to reprogram their robots to commit crimes, as happens in Chappie, and is there really any way to stop that from happening? Research and debate on these topics are in the early stages, but there are already reasons to think our future A.I. and robotic technologies will be expected to reason about ethics differently from how humans do (which itself is still an unresolved question). For example, Bertram Malle of Brown University and colleagues recently reported the results of a study in which humans were presented with a variation on the classic “trolley problem” in ethics. (In the “trolley problem,” one person might be killed or allowed to die in order to save others.) In one case a human was the decision-maker, and in the other, a robot. On average, humans held the robot to different standards, expecting it to make more utilitarian choices than they expected of humans.
The answers to these questions are by their very nature contested and political, as they involve balancing the interests of a wide range of people and groups. However, in a more inspiring example of machine learning than the DeepMind Atari one we mentioned earlier, researchers Susan Leigh Anderson and Michael Anderson developed a program that was able to learn, from labeled examples of ethical case studies, how to weight “prima facie duties” such as respect for autonomy and beneficence in a medical ethics context. In the end, the program concluded, “A doctor should challenge a patient’s decision if it isn’t fully autonomous and there’s either any violation of non-maleficence or a severe violation of beneficence”—a principle that hadn’t previously been explicitly stated but was found compelling by many human ethicists. Machines are able to quickly and reliably process large data sets, follow long chains of reasoning that humans can’t, and derive new concepts rendered invisible to us by our perceptual and cognitive biases. So the Chappies of the future may be more like advisers than merely actors in the world, as we bring the best of human and machine skills together. As New York University professor of psychology Gary Marcus notes, “In its best moments, ‘Chappie’ can be seen as an impassioned plea for moral education, not just for humans but for our future silicon-based companions.”
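The Andersons' actual system used more sophisticated machinery, but the underlying idea (learning how to weight prima facie duties from labeled cases) can be sketched with a toy linear learner. Everything below is invented for illustration: the duty scores, the cases, and the labels are hypothetical stand-ins, not the Andersons' data.

```python
# Illustrative sketch only -- not the Andersons' actual system. Each case
# scores three prima facie duties from -2 (severely violated) to 2 (fully
# satisfied); the label says whether ethicists judged that the doctor
# should challenge the patient's decision. A perceptron learns one weight
# per duty from the labeled examples.

# (autonomy, non-maleficence, beneficence) -> 1 = challenge, 0 = accept
CASES = [
    (( 2,  2,  2), 0),   # fully autonomous choice, nothing at stake: accept
    (( 2, -1, -2), 0),   # autonomous choice stands even at some cost
    ((-1, -1,  1), 1),   # not fully autonomous + harm: challenge
    ((-2,  1, -2), 1),   # not autonomous + beneficence severely violated
    ((-1,  2,  1), 0),   # not fully autonomous, but no duty violated
    ((-2, -2, -1), 1),   # not autonomous + multiple violations
]

def train(cases, epochs=100, lr=0.1):
    """Plain perceptron: learn a weight for each duty plus a bias."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in cases:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                  # 0 if correct, +/-1 if wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

w, b = train(CASES)

def challenge(x):
    """Apply the learned weighting of duties to a new case."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0
```

The learned weights end up encoding roughly the principle the article quotes: challenge when autonomy is compromised and some other duty is violated. The point is not the toy model but the workflow, which is learning how to balance duties from cases human ethicists have already judged.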
Perhaps the traditional defense monopoly on certain technology development is nearing an end. And for the best—if we can’t build long-term vision from diverse perspectives into high-impact research projects, the potential of that research will be stunted. Right now humans must still tell machines what to do. If machines advance to the point where they can decide for themselves, let’s provide them with nonmilitarized options to choose from, too.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page.