If you’ve heard Anthony Bourdain speak even a few sentences aloud, it’s impossible to read his writing without hearing it in his own voice. In Morgan Neville’s documentary portrait Roadrunner: A Film About Anthony Bourdain, Bourdain describes how he would roll out of bed in the morning and head straight for his keyboard without pausing for so much as a cup of coffee. His writing was a fluid extension of his life, and his life was sometimes inextricable from his writing. His longtime collaborators describe how, even in the recording booth, Bourdain would relentlessly cross out and rewrite his own voice-over, in part to purge from it any of the clichés of TV narration that wouldn’t feel true to himself—or at least to the version of himself he was presenting.
The intensely personal connection with that voice may be one reason why some of Bourdain’s admirers reacted so strongly to the news that Neville had used a digital simulation of it in Roadrunner, employing artificial intelligence to extrapolate from hours of his actual voice recordings. As Neville told both GQ’s Brett Martin and the New Yorker’s Helen Rosner, he worked with a software company to create audio readings of three passages that Bourdain wrote but never spoke out loud, at least one of them taken from a personal email never intended for anyone but its recipient. (Neville told Rosner, “You probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know.”) Bourdain’s friend David Choe, an artist and self-described fellow addict, begins to read the email aloud, and midway through, the voice fades into the Bourdain simulacrum’s: “My life is sort of shit now,” the faux-Bourdain says. “You are successful, and I am successful, and I’m wondering: Are you happy?”
Roadrunner doesn’t announce its use of this technology in any obvious way, although the cross-fading technique isn’t used elsewhere in the movie, and even viewers who aren’t especially attentive to the nuances of documentary craft might puzzle over how there’s a recording of Bourdain reading aloud a previously undisclosed email. But it’s a pretty big leap from wondering where the audio came from to assuming that the movie must have employed cutting-edge technology to create a facsimile of a dead man’s voice—or at least, it would have seemed like one until last week.
Although it was published two days after the GQ piece broke the story, it was the New Yorker’s account of Roadrunner’s use of the voice-simulating technology that triggered an intense backlash, perhaps because of Neville’s almost gloating refusal to identify the other places in the movie where the technology is used, or perhaps because of his quip, “We can have a documentary-ethics panel about it later.” While a formal panel has yet to be convened, an ad hoc one erupted on Twitter, with many documentary filmmakers either defending Neville’s decision or suggesting that the reaction was overblown.
“Way overblown,” A.J. Schnack, a documentary filmmaker and the founder of the annual Cinema Eye Honors for nonfiction filmmaking, told me when I asked him to expand on his thoughts. To Schnack, the outrage “made it so clear to me, and depressingly so, that people don’t understand how documentaries are made, and they really don’t understand how documentaries are edited.” The Emmy-winning, Oscar-nominated director Liz Garbus pointed out that “Franken-Bites”—quotes assembled in editing using parts of different sentences—are “common practice,” although she also added that the “rules of engagement” between filmmaker and audience have to be clear.
That’s the crux for Robert Greene, the director of movies like Actress and Bisbee ’17 that highlight the constructed nature of documentary filmmaking. “You need leeway to make things clear, emotionally clear, factually clear, and the complexity is that clarity sometimes comes from creative manipulation. That’s the thing that a lot of people who don’t work in documentary don’t understand. But every time you edit a line, it needs to be thought through deeply. You have to give the audience a way to understand what they’re seeing. That’s the contract. If the film had made it clear we’re creating these words because we fantasize about the idea of him saying these words, if it was clear that this was a projection of a fantasy, that’s fine. But faking his voice is a betrayal of the contract.”
Greene, who is in the middle of finishing the sound for his current documentary, points out that documentary-makers often have their subjects rerecord dialogue, whether that means doing supplementary interviews or just asking them to repeat a line without a stammer. But Bourdain can’t consent to the use of his words in Roadrunner, let alone to the simulation of his own voice, and Neville’s explanations have only muddied the waters. He told Martin that he had gotten permission from Bourdain’s “widow and literary executor,” but one of Bourdain’s ex-wives, Ottavia Busia, tweeted that she was “certainly NOT the one who said Tony would have been cool with that.” Neville clarified to Rosner that the use of an A.I. simulation had been part of his initial pitch, and Busia told her, “I do believe Morgan thought he had everyone’s blessing to go ahead,” but that she had chosen to “remove myself from the process” of making the movie early on, “because it was just too painful for me.” In other words, she was aware that Neville intended to explore the use of the process but had no idea he’d actually done it.
The articles that sprouted up in the wake of Neville’s revelation tended to zero in on the ethics of using A.I. in the context of a documentary. Rosner’s follow-up to her initial piece quotes a specialist in A.I. who says that the backlash against the vocal simulation stems in part from the way that it violates the audience’s parasocial relationship with Bourdain, the one-way bond formed between artists and people deeply touched by their work. But strangely few of them focused on the content of the one passage we know for sure that Roadrunner manufactured. Documentary filmmakers bend reality all the time, especially in the editing process. For instance, a shot of a person entering a building might precede a scene set in a different building on a different day. Most of these elisions are harmless, because while they’re not scrupulously accurate, the events they create the impression of did actually happen. Neville doesn’t put words in Bourdain’s mouth, at least by his own account. (Without knowing the other places the A.I. is used, it’s impossible to verify one way or another.) But the simulation isn’t just reciting Bourdain’s words; it’s giving a performance, one over which the filmmakers and their software designers had total control. It’s not destabilizing the parasocial relationship so much as amplifying it, removing the interlocutor so that Bourdain is reading his email directly to us, converting a private communication into a public declaration. Bourdain didn’t leave a suicide note, but the movie comes perilously close to providing one for him.
Greene and Schnack have both confronted the ethics of making movies about subjects who died by suicide, Schnack in his Kurt Cobain portrait About a Son, Greene in Kate Plays Christine, about the newscaster Christine Chubbuck. For Schnack, the intensity of the response is bound up in the way Bourdain died. “I relate it a lot to Cobain,” he says. “People feel like they have this personal connection with both of them, and after they commit suicide, that connection is maybe even greater. You feel the empathy within you rise, and you want to identify with them and protect them. That’s something that is clearly happening here.” Had the voice-simulating technology been used on a different subject, Schnack says, “I don’t think we’d be debating it to the extent that we are. If it was someone people didn’t know, or didn’t have that connection to, maybe some people would be upset about it, but other people would be like, ‘Oh, they’ve brought them back to life.’ ”
But for Greene, synthesizing Bourdain’s voice is part of the drive to explain Bourdain’s suicide—a drive that films can explore, as both Roadrunner and Kate Plays Christine do, but ultimately have to resist. “We’re desperate to turn suicide into narrative, because we need to make order of it,” Greene says. “The audience is asking you to give them space to heal, and to emote and to find some catharsis in the loss of a human life that really mattered to them. And if you fake that—there’s no other way to look at it—I’m not sure it’s ethically justifiable. That reality still does matter.”
Schnack points to the use of deepfakes to disguise interview subjects in David France’s Welcome to Chechnya as a sign that nonfiction filmmakers will continue to find ways to incorporate these technologies into their art—and he stresses that it is an art. As a journalism school graduate, he says, “I’ve made documentaries that were journalistic in nature and documentaries that are not, and it concerns me when people want to make a confluence between the two. We are using tools to create the truth, as we saw it, when we were making the film. You need to make it feel like you felt.” But while he stops short of condemning Roadrunner’s A.I., he does admit to some discomfort with it. “I think it creeps people out,” he says. “It creeps me out.”
Greene imagines an alternate version of Roadrunner that takes us inside the creation of the synthetic voice, where instead of blurring the transition, the movie highlights the attempt to make sense of Bourdain’s life, using the pieces he left behind. For him, it’s not the tool so much as the transparency with which it’s used—or the lack thereof. “We understand that movies are selling us the idea of authenticity,” he says. “But you still can’t lie.”