We live in the age of interactive celebrity. Both stars and fans buy into it: stars, because it hugely benefits wallets and careers; fans, because interactivity gives both a sense of intimacy and cultural ownership of the stars they follow. Some kinds of interactivity are sanctioned, like official Twitter accounts. Others, like most fan fiction, are not.
Figuring out what to do about unauthorized celebrity imagery isn’t new. But with the democratization of robotics and 3-D printing, the question now has a new dimension. Last week the story broke that a Hong Kong designer, Ricky Ma, made a robot that looks an awful lot like Scarlett Johansson. Fans make drawings featuring celebrities; they write stories; they create sculptures. Is making a robot any different?
There are a number of reasons we allow celebrities to protect some kind of right in their faces. The first is privacy. The commercial use of one’s face or name without permission can be thought of as a privacy harm, founded in autonomy, dignity, or personhood. Nobody wants to be misrepresented as endorsing products or viewpoints they vehemently disagree with. This concern goes to questions about personhood—about how people construct and exist as individual identities.
Many states, however, frame things a little differently. They characterize the “right of publicity” as a propertylike right that allows celebrities, not private individuals, to reap the benefits of their hard work establishing a career, or even as an incentive for celebrities to put themselves into the public eye. Kim Kardashian, this narrative goes, deserves a legally enforceable right in her image, because she has worked very hard to put herself into the public sphere. If anyone could use Kardashian’s image, in the absence of a right of publicity, we might, heaven forbid—and perhaps counterintuitively—see a little less of Kim Kardashian.
The problem with too strong a right of publicity is that it allows a celebrity to squelch public dialogue. Many states consequently recognize a newsworthiness exception to the right; newspapers can write news stories about KimYe without permission. From the perspective of a fan base, however, this and other narrow exceptions aren’t remotely broad enough.
Celebrities don’t create themselves; fans participate in creating celebrities’ value. Celebrities shouldn’t be rewarded for their fame at the expense of fans who help create it. This is the classic conundrum of the right of publicity: On the one hand, people probably deserve some legal protection in their identities. On the other, the stronger that protection, the bigger the impact on free speech and popular culture. There has been a recent slew of video game lawsuits on this issue: Should Gwen Stefani, Lindsay Lohan, and NFL players be able to control the use of their faces in video games? Or should video game designers have creative leeway in making art that reflects and comments on our experience of the real world? So far, courts have not been sympathetic to the commercial video game designer.
Robotic Scarlett Johansson could potentially change this conversation. Courts already employ a trope of “involuntary servitude” when analyzing the use of a particular person’s face in right of publicity cases. According to these courts, allowing use of a person’s face without permission is like forcing that person to work at a job, harming their dignity. The more photorealistic the face, the more dignity-harming the appropriation.
There’s a good argument that the real Scarlett Johansson’s personhood will be more affected by a robotic embodiment than by a fan’s drawing. Evidence suggests that people respond to robots as apparent social actors. Robots can feel like living agents to their human companions. Soldiers empathize with military robots; owners of Sony Aibo robot dogs mourn their mortality; and people refuse to “hurt” their Pleo robotic dinosaurs. A robotic Johansson appears to be acting in certain ways out there in the world, not just endorsing something the actress didn’t want to endorse, or appearing in a context where she didn’t want to appear. The more realistic the robot doppelgänger, the more blurry the lines between felt fact and fiction, the more harmful the robot actor is to the real actress.
This argument could lead to giving celebrities stronger property rights in their images where robots are concerned. This would shift U.S. law by placing less of an emphasis on hard work, and more of an emphasis on threats to personhood and dignity. Or it could lead to the conclusion that people shouldn’t make robots that look like other real people, at all. Human slavery is often held up as the quintessential illustration of limits on property ownership, both inherently destructive of personhood and fundamentally immoral. What about robotic slavery, wherein the Scarlett Johansson robot feels, for all practical purposes, like the human actor you cannot legally enslave?
This question may seem ridiculous. Robots are not human beings; you cannot enslave a robot. In fact, the origin of the word robot is the Czech term for forced labor, coined in Karel Čapek’s play R.U.R., a story of artificial workers that ends in a robot uprising. Robots are famously meant to do labor that is dirty, dangerous, and dull. To restrict what humans can do to robots is to reduce the reasons we should have robots to begin with.
Yet Kate Darling at the MIT Media Lab has argued that what humans do to robots tells us about humans themselves. Darling proposes that we may want to legally protect robots to some extent, because if we treat them in inhumane ways, we become inhumane. (Sinziana Gutiu has made a similar argument addressing sex robots.) This argument can only be stronger when a humanoid robot is crafted to look like a particular, recognizable, living person. If we place no restrictions on what can be done to robots, we may instead want to place restrictions on how closely they can resemble living human beings.
What about the creative rights of the person building the robot? When addressing free speech rights, the Supreme Court has tended to protect creativity and has been unsympathetic toward the idea that fiction can cause harm. But is a robot with somebody’s face harmless fiction? The Court allows lawmakers to ban child pornography, but not purely virtual child pornography, reasoning that fictionalized pornography doesn’t actually harm a real child. Lower courts have split, however, on whether it is permissible to arrest somebody for photoshopping an image of a real child’s face onto adult pornography. No child would have been actually harmed in the making of the image. But some courts see a reputational and dignitary harm to a real child associated with porn. Because of the innate responses we seem to have to robotic actors, such dignitary arguments take on even more force when we’re talking about a robot rather than a static image. Robots may thus provide the testing ground for the extent to which U.S. courts believe in protecting personhood.
The issues go beyond the right of publicity. Robotic ScarJo raises all kinds of questions about gender, biases, and ethics in robot design. If current A.I. interfaces like Siri and Cortana are any indication, “robotification” is more likely to happen to women. (Although, yes, Siri’s default voice is male in some countries—likely reflecting differing biases about gender roles.) It’s important to ask ourselves why.
What if instead of making the Scarlett Johansson robot without the actress’s permission, a robot manufacturer legally licensed her face and trotted out millions upon millions of ScarJos to serve as personal assistants? Is this ethical? Robot designers know we respond to anthropomorphic features, including indicators of gender. They study the ways, for good and for ill, that robot design can affect or elicit human behavior. In one study, men were more likely to donate money to a female robot. In another, users disclosed more or less information about dating, depending on whether a robot presented as male or female. People have gender biases, and bring them into their interactions with new technologies. This is no doubt true of race, as well; most robots currently have a Eurocentric design.
We should be having real discussions about the ethics of the design of such interfaces, from questioning embedded gender and racial biases, to worrying about consumer protection when ScarJo bot asks you, in her husky voice, to buy her an upgrade. Scarlett Johansson the robot shows us that technological design is never neutral. It comes embedded with somebody’s values, and it’s worth asking whether those values are desirable.
These conversations are only just beginning. In many ways, robots are not different; they’ll traverse the legal landscape, raising many of the same old legal problems. But in some instances, they force us to return to first principles, and to see different kinds of harms where there weren’t recognized harms before. The Scarlett Johansson robot isn’t just creepy. She’s a harbinger of complicated legal and ethical change.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.