In June, the California Labor Commission ruled in favor of classifying Uber driver Barbara Ann Berwick as an employee and not as an independent contractor. But the battle over ride-hailing apps continues to rage as companies, governments, activists, and incumbent businesses all seek to shape how a new generation of companies will be regulated. A class-action suit with potentially even bigger implications continues to wind its way through the federal courts, with Uber resisting every step of the way. All this has continued to take place against a backdrop of violent protests, accusations of gender bias, and revelations about less-than-savory techniques of message control.
But the battle isn’t about Uber, Lyft, or any of the other hot ride-hailing startups leveraging similar technology. Increasingly, what underlies the debate over the so-called sharing economy is a nascent, bigger battle about how society wants machines coordinating and governing human activity. These apps don’t match and route people by hand. Instead, software and underlying algorithms make these technologies work. Companies throughout the “sharing economy”—like Postmates, Handy, and TaskRabbit—all depend on the use of machines to match, sort, and assign tasks effectively at massive scale.
To date, the industry has portrayed itself as a mere facilitator. As Uber CEO Travis Kalanick told Wired in 2013, “[Uber is] not setting the price. The market is setting the price. … We have algorithms to determine what that market is.”
So: Is the Uber algorithm really a reflection of the marketplace? The stakes here are high. If Uber is merely a system humbly facilitating a relationship between supply and demand, then it supports the argument that Uber does not exert the kind of control over drivers that would deem them employees in the eyes of the law.
However, if Uber is more than simply a platform for allowing buyers and sellers of transportation to connect, then it may exert a kind of control that renders its drivers effectively employees. At its core, the debate over Uber drivers as employees is about the relative power of the algorithm.
Part of what is so difficult about Uber and many “sharing economy” platforms is that the lived experience of these applications so closely reflects a marketplace. The app connects drivers with riders, with a fluctuating price that seems to correlate with demand. You rate the performance of the driver as you would an independent seller on Amazon or eBay. The app, in short, looks like a market and quacks like a market. But companies like Uber and Lyft merely adopt the tropes of a marketplace. The apps’ user interface suggests a reality that doesn’t exist in practice.
In a real marketplace, supply responds directly to the pressures of demand. This isn’t the case with Uber: According to a forthcoming paper by researchers Alex Rosenblat and Luke Stark at Data & Society (where we work), the supply of drivers is instead mobilized to meet predicted passenger demand, in part through surge pricing.
Drivers are shown a map of “surge zones,” which ostensibly reflect the demand for rides in different parts of the city at a given time. That is how the company frames it, but it isn’t what happens in practice. According to its patents, Uber generates surge based on the projected demand of riders at some point in the future. When it works, this system produces low latency—a rider requesting a car can get one quickly. When it doesn’t, drivers can spend precious time and gas in a neighborhood with slow or nonexistent demand. The suppliers get to see only what a system expects the state of the market to be, and not the market itself. (Update, July 28, 2015: For more on Uber’s “phantom cabs,” read Rosenblat’s piece on Motherboard.)
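To make the distinction concrete, here is a toy sketch of what “pricing off a forecast” means. This is not Uber’s actual algorithm—its details are proprietary—and the zone names, demand numbers, and multiplier formula below are all invented for illustration. The structural point is simply that the surge a driver sees depends on predicted requests, not on requests that have actually arrived:

```python
# Toy model of predictive surge pricing. The multiplier shown to
# drivers is computed from *forecast* demand, not observed requests.
# All zone names and numbers here are invented for illustration.

def surge_multiplier(predicted_requests, available_drivers,
                     base=1.0, step=0.25):
    """Raise price in steps as forecast demand outstrips supply."""
    if available_drivers == 0:
        available_drivers = 1  # avoid division by zero in the toy model
    ratio = predicted_requests / available_drivers
    if ratio <= 1.0:
        return base  # forecast supply covers forecast demand: no surge
    # Add one surge step for each 50% of excess predicted demand.
    return base + step * int((ratio - 1.0) / 0.5)

# Hypothetical zones: (forecast requests next window, drivers present)
zones = {"downtown": (120, 40), "suburb": (10, 15)}
for name, (demand, supply) in zones.items():
    print(name, surge_multiplier(demand, supply))
```

In this sketch, drivers who chase the “downtown” surge are responding to a forecast; if the predicted demand never materializes, they have burned time and gas repositioning for nothing—exactly the failure mode the researchers describe.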
Demand is also walled off from supply. When you open the Uber app as a rider, you see a map of your local pickup area, with little sedans around that appear to be drivers available for a request. While you might assume these reflect an accurate picture of market supply, the way drivers are configured in Uber’s marketplace can be misleading. According to Rosenblat and Stark, the presence of those virtual cars on the passenger’s screen does not necessarily reflect an accurate number of drivers who are physically present or their precise locations. Instead, these phantom cars are part of a “visual effect” that Uber uses to emphasize the proximity of drivers to passengers. Not surprisingly, the visual effect shows cars nearby, even when they might not actually exist. Demand, in this case, sees a simulated picture of supply. Whether you are a driver or a rider, the algorithm operating behind the curtain at Uber shows a through-the-looking-glass version of supply and demand.
What the company has produced is a mirage of a marketplace—an app experience that produces the sensation of independent riders and drivers responding to the natural fluctuations of supply and demand. But a look underneath the hood reveals a system that intermediates and influences more than it facilitates free exchange.
This mirage has effectively confused the debate by allowing the companies to adopt the mantle of a passive marketplace. Uber has persistently characterized itself in its suits as simply “a software application … that permits riders to arrange trips with nearby transportation providers,” implying that it is the users who “arrange” the rides. But in reality, it is Uber that does much more than “arrange”: It sets the price, coordinates the trip, and has the power to exclude both riders and drivers.
Hiding behind the algorithm has also created negotiating leverage for the companies. Perhaps hedging against the risk that the wave of legal challenges will render their workers employees, some representatives of the industry support the creation of a third category—something between employee and contractor. (Sen. Mark Warner and Hillary Clinton have both voiced interest in this proposal.)
Creating a middle ground does make a good deal of sense. To use the language of legal scholar Adam Kolber, the employer-employee distinction is a good example of a “bumpy” law, because it permits only two extreme states: one with low costs to the platforms and the other with considerable costs. This structure creates all sorts of distortions, not least of which is that small variations in the specific behavior of platforms might result in radically different legal outcomes. That might raise risks and hinder experimentation with new forms of work and employment.
Machines’ ability to coordinate the work of many people at low cost could reorganize the workplace in novel and better ways. To that end, the technology that we see at play in ride-hailing apps might indeed hold the promise of flexible work, financial independence, and many of the other advantages trumpeted by boosters of the “sharing economy.” But the question is not whether a middle ground makes sense in some abstract sense. The key issue is whether these technologies, as currently constituted, necessitate a middle ground.
That’s a much less clear proposition. As many observers have vividly documented, companies like Uber and Lyft continue to exert a great deal of control over drivers, even if they use novel systems of employee discipline and monitoring. While the specifics have shifted, the balance of power between company and worker has in large part remained the same.
If the “sharing economy” is to take advantage of a third category, it should have to make significant changes. Categorizing workers in the middle zone between employee and contractor ought to require ride-hailing companies to create work conditions that match the truly novel kind of employment they aggressively market to drivers. The reality of command and control, in short, would have to change to meet the promises of the flexible, free-market mirage.
What the ride-hailing experience shows is the extent to which the behavior of these artificial intelligence systems can diverge significantly from the trappings they adopt. Similar mirages are cast elsewhere throughout the “sharing economy” and even in the design of our social platforms. They downplay the responsibility of the platform designer, masking the more active role these technologies play in the sectors where they operate.
As the uses of artificial intelligence continue to broaden, society will increasingly confront questions around the power these technologies can and should have. As we move toward regulation, we need to question the narratives offered by companies and make sure that policy reflects reality.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.