There’s No Such Thing as “Robot-Proofing”

Educators promise to prepare students for careers safe from automation. That's impossible.


Last December, entrepreneur Amin Khoury gave Northeastern University’s College of Computer Science a $50 million gift. The money was slated for programs that would help new graduates compete in a marketplace increasingly dominated by artificial intelligence and automation. The university’s press release touted, “As the global economy adapts to the influence of artificial intelligence … Northeastern is empowering humans to be agile learners, thinkers, and creators, beyond the capacity of any machine.” The school, like quite a few others, is reimagining itself as an incubator for skills that are difficult to automate: creativity, imagination, mental flexibility. Indeed, Joseph Aoun, Northeastern’s president, literally wrote the book on this. His 2018 Robot-Proof: Higher Education in the Age of Artificial Intelligence tells us, “A robot-proof model of higher education is not concerned solely with topping up students’ minds with high-octane facts. Rather, it refits their mental engines, calibrating them with a creative mindset and the mental elasticity to invent, discover, or otherwise produce something society deems valuable.”

To robot-proof ourselves, then, is to foster those qualities that are somehow uniquely human, and thus harder for machines to mimic. It’s an appealing idea, and unsurprisingly, it’s generating a lot of interest. In the pages of the New York Times, Alex Williams worries that his young children will end up working for robots one day or, worse, “might not work at all, because of them.” He goes on to list a variety of jobs—from radiologist to airline pilot—that are likely to be performed by A.I. Then, trying to put his anxieties to rest, he collects a variety of predictions from experts about which jobs will survive. They tell him, not always consistently, that work emphasizing creativity, empathy, interpersonal communication, and manual dexterity is relatively robot-proof. That’s echoed in a 2018 report in which researchers for the World Economic Forum predict that “creativity, originality and initiative, critical thinking, persuasion and negotiation will retain or increase their value.”

Even Michelob’s Super Bowl commercial assures us that while robots will outperform us at running, golfing, and cycling, they still can’t savor a nice brew. A distraught robot looking longingly into a bar where happy patrons sip Michelob Ultra signals that, thank God, enjoying beer (or maybe even the very ability to enjoy) is robot-proof.

In short, robot-proofing is quickly becoming one of those phrases, and concepts, that gets thrown around a lot, a term that is easy to use but much harder to make sense of. For one thing, the idea that there is a set of skills immune to technological intrusion is wildly ahistorical. The story of technology is the story of making machines do what people think machines can’t do. We’ve been told that humans will never fly, that train and space travel are impractical, that computers will never beat a chess grandmaster, that algorithms can’t play complicated games like Jeopardy! or Go, and that driving is too complicated a task for a machine to perform autonomously. Precedent simply does not support the existence of skills that are inherently “robot-proof.” If there is a hard limit to the growth of machine learning, it is probably related to the pace at which computing power can grow. The exponential increase in computing power that Gordon Moore observed decades ago, and that Intel long treated as a roadmap, is starting to plateau. (Some even argue that it has already come to an end.) The laws of physics, rather than the intrinsic nature of this or that skill, will likely be responsible for an eventual slowdown in the growth of automation.

But even if we choose to ignore the history of technology, there are difficulties with the idea of robot-proofing. First, economic incentives are aligned against it. Automation is attractive because it saves money. The dream it represents—automating the means of production so that complaining, unreliable, easily tired, and disease-prone humans are taken out of the picture—is so lucrative that it will incentivize companies to work on automating even the most complex tasks. And even if we end up with a set of skills that are immune to such efforts, we would have to start rewarding the people who have these abilities for robot-proofing to make practical sense. In other words, if empathic care and manual dexterity, to take two examples, turn out to be robot-proof, we would need to change our social arrangements so that jobs that emphasize these skills (social work, early childhood care, physical therapy) are valued, encouraged, and well remunerated.

Many jobs in the current economy—mobile app developer, social media manager, cloud computing specialist—did not exist 15 years ago. Do we know what the labor market is going to look like 15 or 30 years from now? Do we have a grasp on what the jobs of the future are going to be? And if we don’t, how can we know what skills will be crucial for performing them? Simply positing creativity and cognitive flexibility as key capacities begs the question. Finally, even if we were able to pin down what the robot-proof skills are, and establish that they are key to performing the jobs of the future, are we sure these skills can be taught? Do we know who can teach them? Universities are doing a brisk business adding creativity and innovation modules to their curricula. Chinese universities are looking to American scholars for advice on fostering students’ creativity. But getting someone to pay for something is not the same as successfully delivering it. Is there good evidence that supposedly robot-proof skills such as creativity, empathy, and cognitive flexibility can be taught? Honed? And, independently, who is best positioned to impart and hone them? Are there reasons to assume that universities will be any good at this?

The preoccupation with robot-proofing is, more than anything else, indicative of our anxieties about technological displacement. It is certainly not a coherent policy agenda for protecting ourselves from the impact of automation. On the Mass Pike, Prudential Financial has a billboard assuring customers that “robots can’t take your job if you’re already retired.” The advertisement is funny and dark in equal measure: dark precisely because it lays bare our anxiety about being made redundant and suggests there’s not much we can do about it. Perhaps our time is better spent figuring out what a world without work looks like: how we will pay to sustain ourselves, where we will find meaning and self-worth after we can no longer find them at work, how we can develop our moral and social skills when we can no longer practice them on our colleagues. As usual, our capacity to create remarkable machines outpaces our ability to think about the world they herald. We are probably going to fail at robot-proofing. We don’t have to fail at living meaningfully alongside the robots.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.