Good news: The full-on replacement of humans is nowhere near as close as we have been led to believe. This is true even though algorithms can do an astounding range of things that were once viewed as exclusively human work: They can write news articles, identify faces, edit Wikipedia pages, provide financial advice, discover new drugs, and coordinate meetings among colleagues, to name only a few. But they don’t work all by themselves.
This is a crucial but often overlooked point in the debate around algorithms and the future of work: Most human jobs will not be replaced but rather reconfigured in the near future. We absolutely need to worry about the long-term implications for the demand for human labor and how this will affect the economy. But if we focus only on the question of whether and when humans will be replaced, we miss the impact algorithms are already having on work and the opportunities to make choices, as designers and consumers, about how algorithms can disrupt or enforce existing power dynamics in the future.
A recently released report from the McKinsey Global Institute makes a similar point. Even as it offers tantalizing statistics about the potential efficiency of automation, it points out that fewer than 5 percent of occupations can be entirely automated in the near future. This stands in contrast to the widely reported 2014 Oxford study warning that 47 percent of U.S. employment was at risk from automation. The McKinsey report concludes that the future effects of automation in the workplace should be examined from the perspective of job tasks, rather than jobs in and of themselves.
For example, in their new study “Can Robots Be Lawyers?” Dana Remus of the University of North Carolina School of Law and Frank Levy of MIT contrast the popular argument that algorithms will soon replace lawyers with empirical evidence of what technology can currently accomplish and how the work of lawyers is changing. Based on analyses of hours spent by lawyers in a range of firms, they conclude that only certain tasks can be automated. Even automated document review requires significant human time and expertise to create the fine-tuned classification systems that direct the algorithms. Remus and Levy acknowledge that automation will have an impact on the number of lawyers employed, but they argue that dramatic impacts are unlikely due to technical limitations of machine intelligence as well as social expectations of a lawyer’s value. Communicating with and advising clients, which constitutes a significant portion of lawyer work, remains firmly in the human domain.
Yet whether work is considered most suited to humans or to machines is a function of power dynamics and social hierarchies, not just technical capabilities. For instance, at a fast food or casual dining restaurant, “communicating with and advising” customers is seen as an appropriate task to delegate to computers. However, technology scholar Tamara Kneese recently observed that while the iPad ordering systems being placed in airport terminals are supposed to improve service through algorithmic recommendation, they actually end up creating more work for waitresses and frustrating customers.
Working with—or within—algorithmic systems affects not only skills and perceptions of expertise but also the conditions of work itself. The on-demand economy is able to function in large part because of the automated and predictive aspects of algorithmic systems. Scheduling low-wage shiftwork, particularly in the service and retail sectors, is increasingly “optimized” by algorithms that use real-time big data to predict sales patterns and create the most efficient ratios between workers, consumers, and material resources. This has allowed corporate profits to increase as labor costs decrease. However, as a New York Times feature in 2014 brought to public attention, this mode of inconsistent and last-minute scheduling has made the lives of many workers unbearable.
While the potential to exploit labor is far from new, algorithmically managed work creates new conditions that disadvantage and disempower workers. Existing regulations intended to protect workers do not always apply. Such is the case with debates over the status of Uber drivers. Because on-demand ride companies like Uber argue that they provide only a platform for drivers and passengers to connect, they claim they are not responsible for drivers as employees, thus cutting immense labor costs from their business model. It may be true that not all drivers would wish to be employees in the traditional sense, but the California Labor Commission, among other voices, has argued that on-demand drivers should not be classified as independent contractors. Recent research by Alex Rosenblat of Data & Society (where I also work) and Luke Stark of NYU demonstrates how Uber’s algorithms exert significant control over drivers that is analogous to forms of labor management. Increasing attention to the working conditions of on-demand drivers has exposed how old categories are falling short of protecting the welfare of many workers.
But algorithms can also change people’s work lives for the better and provide improved services. Collision-detection algorithms have the potential to save thousands of lives on the highways of the United States, including those of truck drivers, salespeople, and others whose work keeps them on the roads. Humans and computers working together tend to be better than either humans or computers working alone. But figuring out how to create optimal human-algorithm teams is not easy.
This problem has been evident for more than two decades in the aviation industry. Psychologists and human factors researchers have long argued that there can be serious costs to delegating tasks to algorithmic systems. While automation is generally assumed to relieve humans of menial tasks, freeing them to think about more important decisions, this has proven not to be the case. More free time creates opportunities for skills to degrade and minds to wander. Studies have documented that pilot awareness generally decreases with increased automation. Consequently, more and more researchers are advocating for rethinking how complex, highly automated, and autonomous systems are designed to interact with humans.
Laborer or manager, white-collar or blue-collar worker: The changes that accompany the widespread use of algorithms in work life will touch everyone. If we treat the debate about algorithms and the future of work as more than a discussion of human replacement—as one of redefinition instead—we are better positioned to begin unpacking its social implications.
It’s easy to forget that technological innovation does not unfold along a predetermined path. There is no destiny. Instead, technologies develop in fits and starts through hard work and accidents, with many choices to be made along the way. As consumers, designers, and commentators, we can advocate for and design algorithmic systems that optimize for both employer and employee and that respect the dignity of every kind of job. The future of work is filled with more than robots. It is filled with people who will create the conditions and possibilities of human-algorithm teams, which can make work lives better or worse.
This article is part of the algorithm installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on algorithms:
- “What’s the Deal With Algorithms?”
- “Your Algorithms Cheat Sheet”
- “The Ethical Data Scientist”
- “How to Teach Yourself About Algorithms”
- “How to Hold Governments Accountable for the Algorithms They Use”
- “How Algorithms Are Changing the Way We Argue”
- “Which Government Algorithm to Cut Fraud Works Best—the One Targeting the Poor or the Rich?”
- “Algorithms Aren’t Like Spock—They’re Like Capt. Kirk”
- “What Do We Not Want Algorithms to Do for Us?”