Future Tense

In Defense of Telling Patients They’re Dying via Robot

A viral story shouldn’t keep doctors from connecting with patients, and their families, in faraway hospital rooms.


At 2 a.m. in February, I found myself speaking with the family of a dying man. We had never met before, and I had only just learned of the patient. As an ICU doctor, I have been in this situation on many occasions, but there was something new this time. The family was 200 miles away, and we were talking through a video camera. I was staffing the electronic intensive care unit, complete with a headset, adjustable two-way video camera, and six screens of streaming data.

The eICU at Emory University in Atlanta provides care by physicians trained in critical care medicine to a number of hospital locations within the large Emory system. It also provides coverage to some smaller hospitals, away from the large academic medical centers, that cannot provide continuous on-site ICU physician staffing. The eICU operates 24 hours a day, staffed by nurses during the day and by both physicians and nurses at night and on weekends. For physicians, interacting with patients and families through the small screen, without the benefit of touch, can feel unusual and sometimes uncomfortable. We are trained to use all of our senses to make sense of a situation, and the eICU does not allow the shared intimacy frequently needed in diagnosis and treatment. It can feel remote; when wearing my headset, I sometimes feel more like an air traffic controller than a doctor. Never was that truer than when I was confronted with a dying man and a grieving family 200 miles away.

Recently, a similar event occurred in a hospital in California. Unlike in my case, the screen and camera were on wheels. Like some sort of R2-D2, the device rolled into the patient’s room and began its work.

According to CNN, Annalisia Wilharm, granddaughter of patient Ernest Quintana, 78, was sitting by his bedside in the ICU of the Kaiser Permanente Medical Center in Fremont, California. Quintana suffered from chronic obstructive pulmonary disease, a condition characterized by difficulty breathing; as the disease worsens, a patient can require mechanical ventilation to stay alive. Quintana’s wife and adult child had left the hospital for some rest. That evening, a machine rolled into Quintana’s room with a live video link to a doctor, who explained the terminal nature of Quintana’s illness and said the best option might be hospice.

Quintana died the next day, and his family has apparently not raised concerns about his medical care. But the family has complained about the fact that the doctor shared the prognosis via robot. “I think they should have had more dignity and treated him better than they did,” Wilharm said to CNN. The corresponding mea culpa from the hospital spokeswoman suggested an inclination to agree. Media coverage, too, appears to criticize what happened, with disapproving headlines using the phrase “robot doctor.”

But this was not an example of a robot doctor, which would imply that an artificial intelligence was responsible. An actual, live person was simply communicating through a large rolling video phone.

I can understand why the doctor in the box, animated yet mechanical, was startling. This is something I have grappled with, too. At 2 a.m. that morning, my screen and camera were fixed to the wall, staring down at the patient through a swiveling camera eye under my control. I surveyed the room. I saw the patient on the bed, connected to the ventilator, and in the room, I counted three anxious family members. Slowly and calmly, I explained who I was and how the camera worked. I looked directly into my camera lens so that on the video monitor, my eyes would meet theirs. I acknowledged the strangeness of the technology and worked through the data in an unhurried way. After some back and forth, we all began to relax and accept the communication. Now we could get to it. I found I was still more than able to make a connection, provide information, and demonstrate compassion. They cried, as they should, when I told them their loved one would die. When I have done this in person, I retain the option of providing physical comfort with a hug or even a handshake. These are the natural inclinations shared in our common humanity. In the eICU, I cannot offer my physical self, and a family might interpret this as cold indifference. It is nothing of the sort. These are the limits of this technology.

Nevertheless, the reaction to the California story misses the point. This does not seem to be a failure of the technology. More likely, it was a failure of communication that could have occurred through any medium. Medical schools are bad at teaching how to deliver bad news, and patients often don’t know how to receive it, either. A doctor-patient relationship of trust can be built successfully over the phone and bungled completely in a face-to-face encounter. We do not know the doctor’s state of mind, what came before, or the mental state of the patient or his granddaughter. Absent that, this story tells us nothing about whether remote technology should be used to deliver this sort of news.

The practice of medicine can be seen, almost entirely, as an exercise in communication. A capable physician is able to sort out the state of mind of the listener, but it takes skill. Some people are good at this; others are not. Optimizing the conditions for this interaction must occur before any conversation about sickness and health or about life and death. In this case, in addition to explaining who I was, what I do, and how the doctor-patient relationship is configured in the ICU, I needed to tell the family about the eICU generally, including the camera, the headset, where I was, why I was there, and how they should interact with me. If a doctor skips this step and goes right to the medical information, the listeners may feel they are receiving a data dump, not participating in a conversation. This is not magic. This is not art. It’s a technical skill requiring rapid assessment of the stakes and the players. It requires a nimble vocabulary. It requires ongoing verification that the patient and the family are absorbing the words and their meaning.

In my case, we established a connection. The family understood and thanked me, genuinely, for my time and explanation. It was 2 a.m., and it occurred to me that these tragic stories seem to be, not uncommonly, things of the night. If I had not been there on the camera, no intensive care doctor would have been available to provide an explanation. The patient would still have died, but perhaps his family would have felt caught off guard. Doubt about the cause of death and uncertainty about treatment options can plague and crush a family forever.

It is likely that this California hospital was simply trying to deliver ICU physician care in a way it believed would be sufficient. Difficult conversations are difficult in person and exceedingly difficult over a camera. I believe it is still possible, with eICU-style technology, to provide the necessary comfort, even when the conversation concerns death. It will, however, require a style and method of communication beyond what an in-person conversation demands. My own story was, I believe, a success of this technology because I took the extra time to overcome its constraints. Technology is, for now, the physician’s helper, not a replacement. If we allow the technology to strip away our common humanity, we will all be diminished as a consequence.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.