Joaquin Oliver died in the 2018 Parkland shooting, but recently, he urged people to vote in the 2020 election. Oliver’s parents used A.I. to have their dead son encourage people to vote for officials who support gun control, as an extension of the nonprofit they run, Change the Ref. Technologists 3D-printed Oliver’s image and created a video of him speaking out against gun violence, which his parents could take to protests around the country. In the video, Oliver’s likeness says, “I mean, vote for me. Because I can’t.”
The video of Oliver, titled “Unfinished Vote,” used deepfake technology from Lightfarm Studios. For deepfakes—images and videos generated using A.I.—of celebrities, influencers, politicians, or other public figures, the production team would usually have thousands of images and videos with which to train the A.I. But in the case of a teenager who was more famous after death than during life, the technologists didn’t have much material to work with. Instead, they created a single image of his face using three different photographs. Still, the video is convincing enough. The uncanny valley effect comes after the fact, once you realize that the young man in the video who is speaking to you is not actually speaking.
On Thursday, shortly before Halloween and in the wake of a much-derided, viral tweet exposing her immense wealth and privilege, Kim Kardashian West reacted to Kanye West’s surprise gift to her: a hologram of her dead father. In an Instagram post, Kardashian West wrote, “For my birthday, Kanye got me the most thoughtful gift of a lifetime. A special surprise from heaven. A hologram of my dad. It is so lifelike and we watched it over and over, filled with lots of tears and emotion. I can’t even describe what this meant to me and my sisters, my brother, my mom and closest friends to experience together. Thank you so much Kanye for this memory that will last a lifetime.” The Robert Kardashian hologram, which also relied on deepfake technology, delivered a special 40th birthday message to his daughter and also repeatedly dubbed Kanye West a genius.
Using deepfake technologies to render dead performers, from Tupac to Michael Jackson and Whitney Houston, is not a new practice. But, as Alyx Gorman recently noted in the Guardian, such deepfake resurrections have typically been used for money-making ventures, such as performances. In the case of Robert Kardashian, the deepfake hologram was intended as an unlikely gift to conjure an intimate connection and personal message. Reviving the dead through such technologies is connected to more mundane questions about inheritance, as well as what it means to grieve when the traces of the dead are intermingled with those of the living.
One fantasy associated with data is that it can approximate a human personality or possibly a human soul. If data, in aggregate, can fully know, understand, and imitate a person when they are alive—through information gathered through self-tracking wearables, smart homes, social media behaviors, and ambient data gleaned through GPS and services like Square—then maybe A.I. can somehow capture and extend that person’s essence after they die, relying on past information to make predictions about what they might say or do in the future. Digital remains contain an imaginary afterlife that specifically builds on assumptions about the supposed objectivity and reliability of the digital format, despite its ephemerality and malleability. The examples of Oliver and Kardashian reveal fantasies and discomforts regarding the extension of human life through digital technologies. What does it mean to resurrect the dead, and not only bring them back to life to communicate with them, but to speak through them?
Digital communications are often meant to be short-term, real-time, and immediate, but through what I think of as “platform temporality,” they can be collected and preserved, and potentially passed on to loved ones and future descendants. There is also some amount of anxiety attached to the capacity for data to live on past people’s physiological selves; it is hard to control data during life and it is nearly impossible to do so after one’s death.
Living on through social media accounts is one thing, but actually using A.I. to replicate a person’s personality in perpetuity is something different entirely, especially when it comes to deepfakes. Deepfakes pose problems for human content moderators and audiences as well as for platforms. Aside from issues sorting out real content from artificially produced content, there are fears regarding the loss of control over one’s own image. Deepfakes are often framed as a potential problem in politics, where they are associated with disinformation campaigns intended to skew public opinion. Even unconvincing cheapfakes can go viral. But one of the earliest and most extensive applications of deepfake technology has been in pornography. TikTok creators and other influencers may find themselves the victims of deepfake pornographic images, just as more traditional celebrities have experienced. Deepfakes not only emulate a person’s appearance or communication patterns; they move, shake, and gesticulate as that person would in life. Deepfake pornography—or, at the other end of the spectrum, necromancy—is about the simulation of intimacy, and it simultaneously threatens the loss of control over one’s likeness and one’s person. Impersonating the dead can be just as much of a violation.
Numerous startup companies promise people a small piece of necromancy, and control over their own posthumous communications. If not an elaborate hologram that would cost hundreds of thousands of dollars to make, or a deepfake video relying on hours of labor, then a small token that projects someone’s personality into the future, using A.I. to safely bet on what that person, alive or dead, would do in a future circumstance—how they would vote or what they might buy. I’ve written about some of these concepts and startup companies elsewhere. With numerous companies, from the defunct Intellitar to the transhumanist-backed LifeNaut and newer companies like Replika and Eterni.me, there is a sense that through training A.I., chatbots can fill in for the dead, possibly speaking on their behalf. But, as with more sophisticated deepfake technologies that resurrect the dead, there are ambiguities about who, exactly, is speaking and for what purposes.
In 2006, Nature, the preeminent science journal, published a short work of science fiction by a neuroscientist named David Eagleman. Eagleman has been profiled by the New Yorker, and his book of short stories about possible afterlives was a New York Times bestseller. That same year, Eagleman also quietly launched a startup company named for his short story: Death Switch. While the startup had a practical purpose and shared many attributes with other digital estate planning companies, its connection with Eagleman’s short story hints at possible future scenarios. Death Switch, the company, advertised that emails could be sent to loved ones up to a year after your death and that potential uses included getting the last word in an argument or revealing secrets never spoken in life. The company looked like so many other digital estate planning startups, allowing people to communicate with loved ones from beyond the grave. The loftier aspects of the company, with its plans to replicate people’s personal preferences beyond their physiological deaths, were not yet in place.
In the dimly recognizable future world described by Eagleman in the short story “A Brief History of Death Switches,” programmers implemented the “death switch” so that dead individuals could bequeath their passwords, final wishes, or darkest secrets to their loved ones. Even those who were long dead could send automated but personalized messages to living family members and friends. Dead people continued to purchase items on Amazon according to their cataloged tastes, thanks to algorithms that mapped their personalities well into the future. After some time, the dead “pretended they were not dead at all. Using auto-responder algorithms that cleverly analyzed incoming messages, a death switch could generate apologetic excuses to turn down invitations, to send congratulations on a life event, and to claim to be looking forward to a chance to see someone again soon.” Eventually, everyday social life and physical existence gave way to “a sophisticated network of transactions with no one to read them: a society of e-mails zipping back and forth under silent satellites orbiting around a soundless planet.” Capturing the predictable patterns of human life through algorithms ultimately blurred the distinction between the living and the dead.
At the end of the tale, the narrator notes that when aliens come to Earth, “they will immediately be able to understand what humans were about, because what will remain is the network of relationships: who loved whom, who competed, who cheated, who laughed together over road trips and holiday dinners.” Because of this accumulated data, all of the “planet’s memories survive in zeroes and ones.” Rather than an individual’s personal memories, humanity’s collective legacy is in fact an accumulation of affective exchanges and communicative traces. This possible future echoes the contemporary cultural fantasy surrounding big data, which posits that if we gather enough information, this will be a sufficient proxy for the self as well as for the collective.
While many people would be horrified by the idea of an uncanny valley version of their dead loved ones or themselves, some technologists are enchanted precisely by the indistinguishability of the living and the dead. The problems posed by deepfakes are actually these technologists’ deepest wish: that blurring the boundaries between who is dead and who is alive will make it impossible to tell the difference between the two.