Future Tense

The EU Is Trying to Decide Whether to Grant Robots Personhood

Hanson Robotics’ Sophia answers questions during a press conference at the 2017 Web Summit in Lisbon on Nov. 7, 2017. AFP Contributor/Getty Images

In 2015, an A.I.-powered Twitter bot did something a little out there—avant-garde, one might say. It tweeted, “I seriously want to kill people,” and mentioned a fashion event in Amsterdam. Dutch police questioned the owner of the bot over the death threat, claiming he was legally responsible for its actions, because it was in his name and composed tweets based on his own Twitter account.

It’s not clear whether tweeting “I seriously want to kill people” at a fashion event actually constitutes a crime—or even a crime against fashion—in the Netherlands. But assume for a second that it did. Who would be responsible? The owner? The creator? The user it was impersonating?

Under an ongoing EU proposal, it might just be the bot itself. A 2017 European Parliament report floated the idea of granting special legal status, or “electronic personalities,” to smart robots, specifically those which (or should that be who?) can learn, adapt, and act for themselves. This legal personhood would be similar to that already assigned to corporations around the world, and would make robots, rather than people, liable for their self-determined actions, including for any harm they might cause. The motion suggests:

Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.

Like corporate personhood, this status would be limited: Robots wouldn’t have the right to vote or marry (sorry, technophiles). But the proposal, currently being considered as part of the European Commission’s initiative on artificial intelligence, would make machines legal entities under European law, with accompanying rights and responsibilities—like the responsibility not to tweet facetious death threats. If you’re asking yourself how robots can “make good” on damages, don’t worry, they won’t own money—but they could be compulsorily insured, using funds they accumulate for their owners, Politico recently suggested.

It’s a forward-thinking look at the inevitable legal ramifications of the autonomously thinking A.I. that will someday be upon us, though it’s not without its critics. The proposal was denounced in a letter released April 12 and signed by 156 robotics, legal, medical, and ethics experts, who call it “nonsensical” and “non-pragmatic.” The letter takes issue with granting robots “legal personality,” arguing that none of the available templates—the Natural Person model, the Legal Entity model, or the Anglo-Saxon Trust model—is appropriate. There are also concerns that making robots liable would absolve manufacturers of liability that should rightfully be theirs.

The proposal may be guided in some part by concerns about “black box” thinking: the idea that robots might someday decide to do things with motivations that are opaque and incomprehensible to us—who else to blame but the robot? But the critics argue that we are far from the point of needing this kind of law, technologically speaking. The letter signatories slammed the report, stating that the proposal is based on:

an overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities and, a robot perception distorted by Science-Fiction and a few recent sensational press announcements.

But while the thinking may be flawed (or driven by “superficial understanding”), doing this kind of thinking is not without its merits. John Frank Weaver, the author of Robots Are People, Too and a regular Future Tense contributor, argues that we need to start thinking about the legal framework for our Westworld-like future before it’s too late. Weaver has written about what it means to give robots various aspects of personhood, including the right to free speech, the right to citizenship, and legal protections (even for the ugly robots). As you can guess from the title of his book, he himself recommends limited legal personhood for robots, including the right to enter and perform contracts, the obligation to carry insurance, the right to own intellectual property, the obligation of liability, and the right to be the guardian of a minor.

And while the Dutch Twitter bot didn’t follow through on its threat, the potential for autonomous robo-harm is clearly already with us, with fatal self-driving car crashes in the past month alone. Robots may not be thinking for themselves yet, but they are certainly thinking. With great processing power comes great responsibility. Who—or what—holds it is still to be determined.

Also in Future Tense: Read a short story by sci-fi author Paolo Bacigalupi about a murderous robot and product liability.