From The Empathy Diaries: A Memoir. Published by arrangement with Penguin Press, a member of Penguin Random House LLC. Copyright © 2021 by Sherry Turkle.
It’s June 2018 and I’m in Barcelona, a city in which I have little history. I’ve been a tourist here twice. Once I came to see the wonderful Gaudí buildings and once I visited friends with my daughter Rebecca, who was 10 at the time. Rebecca had torn a ligament in her ankle in a playground accident a week before our planned departure. But we took off for Barcelona anyway, planning to take taxis and eat tapas. My friends had young children. If we couldn’t do a lot of sightseeing, at least we would have the benefit of a new place, new food, deepening connection.
Over the weekend, we took a road trip to PortAventura, a Spanish version of Disneyland south of Barcelona. At the entrance, Rebecca was issued a wheelchair—her “invalid” status meant that everyone in our party automatically went to the front of every line, even the ones with two-hour wait times. We were thrilled by what felt like a guilty pleasure.
Nearly two decades later, these happy associations come back to me. I have finished the first draft of this book, and I text Rebecca my good news. We’ve been texting back and forth, working on details for her wedding, planned for next summer.
I’m in Barcelona to give a speech about new trends in artificial intelligence. I’ve called my talk “The New A.I.—Artificial Intimacy.” Now machines are not content to show us they are smart; they pretend to care about our love lives and our children.
When computer toys first wowed children with their ability to play games, children not only saw the toys as “sort of alive” but actually changed how they talked about what was special about being a person. Now they saw people as special not because they were smart (these new machines were smart as well) but because they had feelings. Young children essentially described people as emotional machines. It seemed an unstable category. Even then, my computer-scientist colleagues dreamed of creating robots and screen avatars that could be our companions of the heart—machines with as-if feelings and as-if empathy. I wondered: Once people were in the company of these new “emotional” machines, the artificial ones, how would we distinguish ourselves from these pretenders? More important, would we want to? Would pretend empathy seem empathy enough?
I came up with this troubling formulation: We nurture what we love, but we love what we nurture. After taking care of an object, even one as simple as a digital pet that lived in a plastic egg and wanted to be fed and amused on schedule, children (and their parents) got attached to it emotionally. This finding did not have to do with the intelligence or empathic qualities of the digital objects that asked to be taught or tended. It had to do with the vulnerability of people. When machines ask us to care for them, we become attached to these machines and think that the machines care for us. “Pretend empathy” had an awesome weapon: the deep psychology of being human.
And now we were beyond human vulnerabilities and projections.
Now the machines were outright declaring their affection.
This is the original sin of artificial intelligence. There is nothing wrong with creating smart machines. We can set them to all kinds of useful tasks. The problem comes when we create machines that let us think they care for us. “You are the wind beneath my wings,” says Siri in response to “Siri, I love you.” These “empathy machines” play on our loneliness and, sadly, on our fear of being vulnerable to our own kind. We must confront the downside of living with the robots of our science-fiction dreams. Do we really want to feel empathy for machines that feel nothing for us?
As I prepare the final notes for my talk on the dangers of artificial intimacy, I suddenly remember a night in 1982.
My husband Seymour and I had gone to the Boston premiere of Tron with A.I. researcher Marvin Minsky. We were excited. The film depicted the mind as a society of programs—this was the theory that Marvin and Seymour were writing about! After the film, we stood outside the theater while Marvin regaled us with his ideas about our object minds. The film, said Marvin, was on the right track. Everyone should take their kids to this film and avoid more traditional fare.
In his mind, Walt Disney’s Bambi was the worst. I took the bait.
What was wrong with Bambi? Every kid sees Bambi. Marvin’s response has stayed with me for half a lifetime: “Bambi indoctrinates children to think that death matters. Someday we will conquer death by merging with computers. Such attachments—Bambi’s attachment to his mother, for example—will be unimportant. People need to learn to give that stuff up.” I knew Marvin to be a loving father and husband. But in his mind, attachment would only be an impediment to progress in a world where people and machines evolved together.
Marvin Minsky died in 2016. But I’m still fighting his idea, now more than ever part of the cultural mainstream, that it is good to have devices that can wean us from our dependency on one another. For Marvin, the burdens that come with human bonds were unnecessary and inefficient because an engineering solution was on the horizon—we are ultimately going to mate with machines or evolve into machines or become one with machines.
These ideas are seductive. Of course we want technology to bring us sharper wits and a cure for Parkinson’s. We like the idea that some kind of artificial intelligence can help monitor the safety of isolated elders. And then we are caught short. There is a red line—one I have seen so many people cross. It’s the line you cross when you don’t want children to get attached to their mortal mothers because they should be ready to bond with their eternal robot minders. It’s the line you cross when you take your child as your experimental subject and ignore her, registering her tears as data. It’s the line you cross when one of your classmates commits suicide by jumping out a window and you joke about the laws of physics at work in his descent. It’s the line you cross when you know that the car you manufacture has a design flaw and a certain kind of impact will kill its passengers. You’ll have to pay damages for their lives. What is the cost of those lives in relation to the cost of redesigning the car? This is the kind of thinking that treats people as things. Knowing how to criticize it is becoming more pressing as social media and artificial intelligence insert themselves into every aspect of our lives, because as they do, we are turned into commodities, data bought and sold in the marketplace.
At the very moment we are called to connect to the earth and be stewards of our planet, we are intensifying our connection to objects that really don’t care if humanity dies. The urgent move, I think, is in the opposite direction.
The evening before my talk in Barcelona, I thought about Marvin and Tron. I remembered that when Rebecca was small, I went out and bought her all the Disney movies I had seen as a child. Of course, I bought Bambi. In my home, there would be no shortage of stories with mother-child bonds.
And now Rebecca is 27. Only a few weeks before, she helped me choose my gown for her wedding. I had a favorite, but wasn’t the neckline too low? Studiously, patiently, my daughter stood opposite me, putting herself in my place, taking me seriously. My empathic girl. No, she said. The neckline is perfect.