Future Tense

Robots Are People, Too

If courts are going to treat corporations like humans, they should do the same for robots.

When drones deliver your packages, they’ll need the right to enter and perform contracts, among other freedoms and obligations.

Photo of a DHL microdrone by Wolfgang Rattay/Reuters

In Burwell v. Hobby Lobby, the Supreme Court recognized that corporations are entitled to religious freedom under the First Amendment, much as actual people are. This further expands the “legal personhood” of corporations. They have the right to own property, enter into and enforce contracts, and make political expenditures under the First Amendment. However, corporations also have some of the obligations of real people: They can be sued and held liable under civil and criminal law.

These rights and responsibilities stem from federal and state laws, which recognize that corporations are “people” in certain situations. The idea of legal personhood originally arose in America to promote the work of the country’s first corporations: banks, insurance companies, water companies, and transportation companies. At the time, these corporations were viewed as providing essential public services. They needed to have some of the rights and responsibilities of real people to provide those services.

I wouldn’t disagree that we’ve gone too far in considering corporations people under the law. But the initial reasoning for corporate legal personhood—treat them as legal persons so they can be beneficial—is still sound. The reasoning is also sound for robots.

We are beginning to see autonomous technology and artificial intelligence that we will interact with as we would with other people. We’ll talk to our self-driving cars, rely on drones instead of the FedEx guy to deliver packages, and use original media and content (like recipes) produced by machines. To help ensure that our interactions with these robots are beneficial and occur as intended, we need state legislatures and Congress to pass laws that grant limited legal personhood to these types of technology. After all, if we deal with robots as though they are real people, the law should recognize that those interactions are like our interactions with real people. In some cases, that will require recognizing that the robots are insurable entities like real people or corporations and that a robot’s liability is self-contained. In the same way you cannot sue a bystander who did nothing to cause an accident, you should not be able to sue the owner or operator of an autonomous drone who did nothing wrong while the drone was autonomously operating. At the same time, victims of drone accidents need to have recourse to recover for their injuries, and the drone’s insurance provides them with that.

Similarly, when you use a grocery delivery drone and its software to determine your food needs, make your order, and deliver it, the store will actually have very little direct involvement with your order. When there’s a problem with your order, from a legal perspective, your claim will be against the drone and its insurance, so you will need to have a contract with the machine. Otherwise, you’ll need to sue the store to get to the drone, despite the retailer’s relative innocence. At other times, like when a drone is acting in the place of a delivery person, the law will need to adequately recognize that the drone is an agent for the retail company and that the contracts it forms are binding against the company.

Any legislation seeking to grant legal personhood to robots should use those rights and responsibilities as tools that permit robots to help or improve the lives of real people. With that in mind, we should extend the following freedoms and obligations to robots.

  1. The right to enter and perform contracts. As I just noted, many of our interactions with robots will be economic. They will deliver our Amazon orders and arrange our grocery orders. When the drone drops off our kids’ Curious George Blu-ray collection and when the grocery store software creates and buys our weekly shopping list, we will engage in small, contractual transactions. All the parties in that transaction need to be able to rely on those contracts. If, instead of Curious George, the Amazon drone delivers a very disturbing DVD about cosmetic testing on monkeys, the family that has signed for the package must be able to rely on that delivery contract with the drone when it complains to Amazon. Similarly, the person using supermarket software to plan and buy grocery lists cannot later refuse to pay, claiming that the software was legally incapable of making a contract. By giving robots the right to enter and perform contracts, we ensure that they will provide useful services in the economy while minimizing questions about their legal ability to do so.

  2. The obligation to carry insurance. Some states have already moved in this direction as they have passed legislation governing autonomous cars. Legislation that treats each individual autonomous car as an insurable entity protects the car owner if liability is held entirely by the car. In other words, if legislation requires a self-driving car to have its own insurance and prevents plaintiffs from suing the owner, that would incentivize real people to buy autonomous cars because they will not be personally liable for the car’s self-driving accidents. The car becomes a separate insurable being that potentially provides a faster insurance payout to victims while protecting the owners from frivolous lawsuits.

  3. The right to own intellectual property. Or at least, the right to be recognized as the creator of intellectual property. U.S. intellectual property law only recognizes real people as the creators of intellectual property. It does not recognize that artificial intelligence can be an inventor or author. Under current law, any intellectual property created by robots and programs enters the public domain. In order to incentivize the creation of machines that can create art, literature, music, etc., Congress should amend patent and copyright law to acknowledge that robots and programs can be inventors and authors. However, given that the current intellectual property laws are designed to protect the huge investments of time real people make as inventors and writers, there is little need to provide full IP protection to machines churning out 500 new novels in a morning. Rather, machine-created IP should receive lesser protection, perhaps for 10 years, held by the machine’s inventor, before the invention or work enters the public domain and everyone else can use it for free.

  4. The obligation of liability. There will be times when the robot itself could be liable rather than its owner or operator. For example, if the errant Amazon drone above were autonomous, and it—rather than a human worker—were somehow responsible for delivering the disturbing monkey DVD, the drone could be a named party in the suit, rather than Amazon. In that case, the mandatory insurance described above would pay a settlement or court order. Looking at the effect on all robots, this would give relief (possibly much faster relief) to victims who have been hurt by the robots while also permitting the robot operators and owners to calculate their liability in the event the robots cause damages.

  5. The right to be the guardian of a minor. In the not-too-distant future, robots will watch our kids. There have been technological advances in this area already. However, in our legal system, a child is always in someone’s custody. Kids pass from their parents’ custody to their school’s custody, to the babysitter’s custody, and back to their parents again every day. If a parent leaves a child with a robot nanny while she runs errands, who has custody of the child? Should it be the parent, even if she is not home but has left the child with a responsible robot nanny? The robot babysitter will likely be insured as a separate entity, per No. 2 above. Assuming the machine has been well designed, tested, and vetted, it makes more sense for the robot to become the guardian and assume that liability while the parent is out. The robot has been designed for that task, and the parent is not actually around to properly watch the child. Making the artificially intelligent nanny the guardian encourages manufacturers to properly design the robot to function as the guardian and encourages parents to try, trust, and adopt the technology.

These examples are fairly straightforward aspects of legal personhood that we should extend to robots. As this technology develops, there likely will be more rights and responsibilities that we will want robots to have in order to maximize their contribution to society. However, I have a hard time seeing religious freedom among those rights. A Google car’s performance will be unaffected by its ability to worship the Flying Spaghetti Monster.

This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.