One day, robots will present difficult legal challenges. This seems to be the consensus among commentators. And who am I to disagree? I have myself argued, right here on the digital pages of Slate, that robotics will generate no fewer puzzles for the law than the last transformative technology of our time—the Internet. Future courts will have to decide, for instance, whether a home robot manufacturer is responsible for the apps that run on it and whether to hold anyone accountable for robot behavior no one intended or foresaw.
So I’m in agreement with the many scholars, journalists, and others who see interesting times ahead for robotics law and policy. It turns out, however, that there are just as interesting times behind.
Robots have been in American society for half a century. And, like most technologies, they have occasioned legal disputes. A small team of research assistants and I went back and looked at hundreds of cases involving robots in one way or another over the past six decades. The cases span a wide variety of legal contexts, including criminal, maritime, tort, immigration, import, tax, and other law. Together they tell a fascinating story about the way courts think about an increasingly important technology. (You can read the full paper, “Robots in American Law,” here.)
In many of the cases I came across, the role of the robot was incidental: The case would likely have come out the same way had a robot not been at issue. Some of these incidental cases were fascinating. Nannuzzi v. King et al. (1987) involved an injury on a movie set, where a robotic lawnmower malfunctioned and injured a cameraman. The film, written and directed by Stephen King, was Maximum Overdrive, a film about machines coming alive and attacking people. Nevertheless, the legal issues would have been basically the same had, say, a stage light fallen on the cameraman instead.
In other cases, however, it really seemed to matter that a robot was at issue. In White v. Samsung (1993), for example, a federal appellate court had to decide whether a robot version of Vanna White in a Samsung print ad “represented” the game show hostess for purposes of the right of publicity. The majority thought it did. The dissent was adamant it did not. “One is Vanna White,” said the dissent, “The other is a robot. No one could reasonably confuse the two.” Just a few years later the same court encountered a second case of robots emulating people, this time Cliff and Norm from the television show Cheers. Judge Alex Kozinski’s eventual dissent from a decision not to rehear the case began with the words, “Robots again.”
In Comptroller of the Treasury v. Family Entertainment Centers (1987), a Maryland court had to decide whether life-size, animatronic puppets that dance and sing at Chuck E. Cheese restaurants trigger a state tax on food “where there is furnished a performance.” The court went on at length about the nature of the term performance and why a robot could not display the requisite spontaneity. And in Louis Marx & Co. and Gehrig Hoban & Co., Inc. v. United States (1958), a customs court had to decide whether a “mechanical walking robot” being imported represented an animate object (and was therefore a doll), which is taxed at a lower rate. The court went on to draw a distinction between a robot, which represents a human, and the toy in question, which only represents a robot.
Courts have also had to decide whether a robot can extend a person into new spaces. In Columbus-America Discovery Group, Inc. v. S.S. Central America (1989), a court had to decide whether a salvage operation was entitled to exclusive rights to a shipwreck. The usual way to establish the right to salvage was by physically diving to the wreck or else bringing the entire wreck up above water. Here, conditions were such that the salvage operation could only reach the wreckage—which was full of gold from the California Gold Rush—through unmanned submarines. Undaunted, the court announced a new, robot-specific doctrine of “telepossession” that survives to this day.
These and the many other examples I came across show how difficult it can be for judges who confront robots to draw the line between person and instrument. This difficulty should hardly surprise us. Research in psychology by Peter Kahn and others shows that people in general struggle with how to characterize anthropomorphic technology. So much so that Kahn and his colleagues propose a separate “ontological category” for robots somewhere between alive and inanimate.
Nor should we be surprised that judges find themselves turning to robots as a metaphor in contexts where a person may appear to society as less (or more) than fully human. I was struck by the court in Frye v. Baskin (1950), which declined to fault a young woman who, unable to drive, crashed a car while under the instruction of a young man, on the theory that she was only his “robot.” I was struck by Judge Higginbotham’s claim in Commonwealth of Pennsylvania v. Local Union 542 (1974) that white judges can be people with passions and experiences but black judges are expected to be robots. And I was struck by how often immigration courts would reject asylum claims on the basis that the applicant appeared to be “robotic” in his or her testimony.
We cannot predict in advance how robotics, an increasingly sophisticated technology, will transform law and legal institutions. But what seems clear from 60 or so years of courts struggling with robots is that the path of robotics law will be a winding one. And it has the potential to tell us just as much about ourselves as about the robots.