In his 1984 film The Terminator and its sequels, James Cameron imagines a dystopian future in which armies of intelligent robots move with startling suddenness from positions of servility to utter and violent dominance, destroying civilization and driving humankind to the brink of extinction.
This, of course, is pure science fiction. There’s little reason to believe things will unfold that way. First, they would take all our jobs and wreck our economy.
This is the nightmare narrative of our future with robots and artificial intelligence. The utopian version of this tale—one accepted by many powerful people in industry and government—involves a progression in which we teach robots and AI, then they teach us, and finally we join with them, become one with them, and seize the reins of human evolution to strike out in a radical new direction. We emerge as hybrid beings, possessed with immense power and nearly unlimited knowledge.
These were the narratives that ran rapidly through my head when Joel Garreau, co-director of Emerge 2014, approached Jake Pinholster and me about creating a show with a robot.
Emerge is an annual festival at Arizona State University that brings together scientists, artists, engineers, and writers to imagine and design the future of the human experience. (Disclosure: ASU is a partner with Slate and the New America Foundation in Future Tense; Garreau is also the co-director of Future Tense.) Jake and I, both professors of theater at ASU, have created original works of performance for each of the previous Emerge festivals, using technologies ranging from architectural projections to humble Twitter accounts. When Joel approached us, Jake and I had already begun creating a new work for Emerge 2014; this one involved a pair of clowns and an aerialist. Adding a robot to the mix seemed an exciting challenge, both artistically and technologically.
A few weeks later, I walked into the Autonomous System Technologies Research & Integration Laboratory at ASU’s School of Earth and Space Exploration. There, I was introduced to Baxter, a humanoid robot created by Rethink Robotics. ASTRIL had acquired Baxter only a few months earlier to work on developing technologies for robot/astronaut interactions in space exploration. I figured if we could get the robot to interact effectively with clowns, astronauts would be a cakewalk. I looked Baxter up and down his (“His”? I guessed male—because patriarchy) red and black body, with its hulking arms and iPad-like face, then turned to Srikanth Saripalli, the director of ASTRIL and a roboticist at SESE.
“Can he juggle?” I asked.
“I … I don’t think so,” Srikanth said doubtfully.
“Can he fail to juggle?”
Srikanth grinned. “Oh, yes! Spectacularly!”
This realization sparked much of the work that followed, for failure is as interesting and valuable to an engineer as it is to an artist. And this is even more acutely true for clowns. A clown works in failure like Michelangelo worked in marble.
In the ensuing weeks, I worked with our clowns—graduate students Brian Foley and Chelsea Pace—to devise a series of performed metaphors that addressed the past, present, and future of human/robot relations. Our first question was, “What can this robot do?”
This is almost never an easy question to answer for new technologies, in part because, though capabilities are not unlimited, neither are they certain. One doesn't so much discover capabilities as produce them. Or rather, one does both. This often involves transforming the technology itself, as well as the processes and means by which you engage it. That, in large part, is what research in engineering means. It is largely the same in performance.
For instance, Emily McBryan, an undergraduate aerospace engineering student, designed and built two different hands for Baxter. After several failed tests, she constructed new scooplike parts for one of the hands to allow it to throw objects more effectively. Our sound designer Stephen Christensen worked closely with Sai Vemprala, the graduate research assistant who programmed all of the robot's movements, to design and produce an intuitive interface that allowed us to control Baxter through an iPad. Faced with frustrating lag time in lab tests, Stephen rewrote the control code several times, radically reducing the delay and enabling Baxter to respond quickly to the fluid and rapidly changing situations of a live performance.
The theater culture in which I work is certainly different from the culture of science and engineering in which Srikanth and his students work. Yet this collaboration is not as unlikely as you might think. As I said to Srikanth, our superficial differences mask a deeper affinity. We both focus on performance: the performance of materials, technologies, processes, and systems. My theater collaborators and I are just additionally concerned with the performance of organic autonomous systems—namely, people.
The entire team learned a great deal from this collaboration. Srikanth and I hope to continue this research together, and we are pursuing funding to make that possible. Aside from the genuine and serious advances we made in robotics and control technologies, aside from what we learned about collaborative processes across fields, I came away from the project with at least two (rather more whimsical) insights.
Firstly, teaching a robot to pop and lock is more difficult than one might expect. Humans still do "the robot" better than actual robots. A comforting irony.
Secondly, a robot throwing rubber ducks into a clown's pants is as comically sublime an act as I could wish for—though to make it truly worthy of MoMA-level art, we'd need to do it for, say, six hours.
And who needs dystopian or utopian tales when you have that?