This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. On Wednesday, Jan. 20, Future Tense will host a lunchtime event in Washington, D.C., on human-robot interaction. For more information and to RSVP, visit the New America website.
In his 1993 album No Cure for Cancer, comedian Denis Leary satirized animal rights by claiming: “We only want to save the cute animals, don’t we? Yeah. Why don’t we just have animal auditions. Line ’em up one by one and interview them individually.” Understandably, supporters of animal rights frequently objected to Leary’s oversimplification—otters might do cute little human things with their hands, but there’s room for bovine legal protection.
In a way, the animal rights activists’ approach is a good template for the way we should approach robots’ rights. Save the cute robots, but save the practical, homely ones as well. And by save, I mean create defined legal rights and protections for them.
The cute robots are already well taken care of. Researchers like Kate Darling, research specialist at the Massachusetts Institute of Technology Media Lab and a fellow at the Harvard Berkman Center, and Yueh-Hsuan Weng, co-founder of the Robolaw.Asia Initiative at Peking University, are exploring the benefits of providing legal protections to robots we befriend.
Darling has written and presented on the topic of extending legal rights to “social robots,” which she describes as a “physically embodied, autonomous agent that communicates and interacts with humans on an emotional level.” She notes that studies indicate we anthropomorphize social robots and form emotional bonds with them. And she predicts that in the same way we have passed laws to protect animals because of our personal attachment to them, we may also create legal protections for robots because we have bonded with them.
But Darling also worries about desensitization and argues that we should pass legal protections for social robots because of the ethical implications. She writes in an email that if
robots and living things become muddled in people’s subconsciousness, there could be an effect on people’s behavior if they become accustomed to “mistreating” robots that move and otherwise behave in a lifelike way. Like if a child grows up kicking a robot dog, will they be more likely to kick a real animal? We don’t know the answer to this yet, but we know that it brings the violence in video games question to a whole new level. We’re very physical creatures. And in this case, we might want to prevent people from “mistreating” lifelike robots.
Her concerns have merit—research indicates that kids like to beat up robots, despite perceiving them as “human-like.” Encouraging kindness toward humanlike robots may encourage kindness toward actual humans.
Similarly, Weng advocates giving humanlike machines a special legal status he calls the “Third Existence,” which would basically provide them with rights akin to the protections granted to pets. Weng notes that a person who injures your dog can be found liable for the cost of treatment and care, and he proposes that robots be granted similar protections. “My main argument is that current laws do not help human beings to project their empathy while interacting with humanoid robots,” Weng told Tech Insider (emphasis added).
Weng’s and Darling’s efforts stem directly from the concern that our interactions with social robots and humanoid robots will desensitize us to violence against real people. Their proposed response—legal protection for certain types of robots—ends up “saving” the cute robots.
However, in focusing on laws that protect how we socialize with anthropomorphized robots, we need to make sure not to ignore plainer robots. They need legal protections, too. In fact, I have gone so far as to recommend that we grant them limited legal personhood. It’s not because we should empathize with them—it’s because laws governing interactions with ugly bots could improve their utility and benefit to humans.
As any monkey with a camera and a selfie stick can tell you, copyright protection exists only for works created by human beings. It certainly doesn’t cover works created by artificial intelligence programs, which fall into the public domain. Creating certain intellectual property rights for creative works produced by that technology would give programmers and designers an incentive to do more work in the field. Similarly, it’s not clear that contracts signed by robots are valid, meaning delivery agreements with drones and purchases performed by retail A.I. might not be binding. Granting robots the right to enter and perform contracts would let utilitarian, nonsocial, nonhumanoid robots provide useful services in the economy—make deliveries, take orders, etc.—while minimizing uncertainty about their legal ability to do so.

Clarifying that the First Amendment applies to robots would protect autonomous writing technology from overzealous legislators offended by what it produces. And requiring certain kinds of robots—self-driving cars, autonomous delivery drones, etc.—to carry insurance would protect owners and third parties alike from potential liability.
The interactions I just described will frequently be dull. They won’t be cute. These machines will merely function; they will not invite emotional bonding. But thoughtful laws and policies can make these interactions more useful, letting A.I., autonomous devices, and robots help us in ways that social robots cannot. When we think about laws for robots, it’s important to protect the cute robot otters, but let’s not forget to provide some rights for the practical robot cows too.