Even if you aren’t trying to get pregnant, fertility apps can help you remember when your last period was. Users can log their menstrual cycles rather than rack their brains (or write it down in a notebook they might lose or forget about) and get predictions for when their next cycle will begin. They can also track intimate information about their bodies and sexual activity, including weight, mood, and frequency of sex. It’s great.
Unless your employer has paid to get a copy of that data. In April, the Washington Post reported that Ovia, a fertility app, allows employers to provide accounts as part of wellness programs and to access, in aggregate, the data employees provide. Using that information to inform hiring, promotion, or pay decisions would violate federal law, but that doesn’t mean it doesn’t happen. The risk is especially acute in a workplace with few women, where even “de-identified” aggregate data can point to specific individuals and so provides little protection from discrimination.
Ovia is an example of employers seeking ever more information about their workers and expecting employees to help via self-surveillance. Yet if you were to tune in to the roiling privacy debate on Capitol Hill, you’d hear little of this. Congress’ debate over privacy legislation has largely left out workplace protections. The current debate centers on “consumer privacy,” a framing that falsely splits you into two identities with different protections: you the consumer and you the worker.
The surveillance of workers and the workplace is far from a modern phenomenon. Like racial minorities, immigrants, and religious minorities, poor and working people have long been disproportionately surveilled by their employers and the government. (This will be the subject of the upcoming Color of Surveillance conference at Georgetown Law on Nov. 7, a project I’m co-organizing.) Henry Ford, for example, sent detectives to his factory workers’ homes, where investigators would question employees and examine their households to evaluate their moral standing. Concerned about high turnover, Ford wanted his employees to conform to his social and moral expectations. The Pinkerton Detective Agency, meanwhile, infiltrated mining communities and unions at the direction of union-busting owners; its detectives gathered information used to disrupt organized action and disband unions.
From the 1590s to the 1800s, English lawmakers monitored the poor who received financial assistance. The “Poor Laws” they enacted required paupers to work in exchange for benefits. The policies also carried mechanisms for social control: Administrators could cite absence from church services or drunkenness to reduce or withhold support. According to historian Steve Hindle, one woman lost her pension after refusing to marry the man who impregnated her. The Poor Laws also relied on neighbors to report on one another.
But historically, surveillance had limits. A detective could keep track of only so many workers at a time, and the work was expensive and resource-intensive.
Over time, surveillance has become more sophisticated, more granular, and more invasive. Employers have gone from relying on human informants to scanning employee emails and monitoring Reddit. In July, a Walmart employee was fired for posting confidential information about a new Walmart program on a subreddit used by other Walmart employees. Members of the subreddit responded by posting pro-union memes (Walmart is notoriously anti-union).
But surveillance goes much further than spying on communications. Rather than asking a foreman about employee productivity, an employer can pull up that information from an app, a new form of surveillance made possible by algorithmic managers. In contrast with previous methods of surveillance, algorithmic management requires workers to watch and report on themselves. The artificial manager constantly collects and stores worker input; it knows, in precise detail, where you are and how much work you’ve completed. Hotels give housekeepers apps to track their workflow: The app knows which room a housekeeper is in, and how long each room took to clean, because the housekeeper tells it. Algorithmic managers likewise keep track of which orders warehouse workers have fulfilled and how many miles rideshare drivers have traveled. Employees often cannot disable these apps without retaliation, and in some cases, like ride-sharing and food delivery gigs, it is literally impossible to work without the app.
As surveillance increases, workers are losing protections. Collective bargaining could give workers a way to push back against surveillance, but few have that option. At the height of union membership, 1 in every 3 American workers belonged to a union; today, it’s roughly 1 in 10. Moreover, the workers most affected by algorithmic managers are dispersed, which makes collective action especially difficult to organize. In May, for instance, Uber and Lyft drivers organized a strike to protest ahead of Uber’s initial public offering. But without a centralized place to communicate, some drivers learned about the protest only during or after the strike.
Workers need protection from expanding surveillance and control. Yet if current congressional debates bear any fruit, the average American will soon have more protection from tech giants and telecommunications companies than from the people who sign their paychecks. The unauthorized or harmful use of geolocation data affects both consumers and workers, but only the harms to consumers are being addressed. After Vice reported that bounty hunters were purchasing consumers’ geolocation information from cellphone companies (two bounty hunters were allegedly involved in a triple murder earlier this year), Federal Communications Commission Commissioner Geoffrey Starks and Oregon Sen. Ron Wyden called for agency action or policy change to end the sale of this data. Yet employers can directly access their employees’ geolocation data through mobile apps, and no congressional action is at hand. Workers should have the same privacy over their data that consumers do. Otherwise, they’ll continue to find themselves coded into a corner.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.