Even if you aren't interested in artificial intelligence, A.I. is interested in you. When you upload and tag a photo of yourself on Facebook, you help improve the facial recognition software the company uses to link faces to identities. Your photo might also end up with companies like IBM and Microsoft, which crawl the web for public images, from selfies with your friends to celebrities on the red carpet, to build facial recognition systems that they then market to law enforcement, border control, and private security companies. These systems, which are racially biased and unregulated, may be used on you by the police when you're walking down the street. And now, Customs and Border Protection contracts companies like SITA to implement facial recognition for passengers boarding international flights, while the Transportation Security Administration has similar plans to roll out face scanners for domestic flights. The constant surveillance to which internet companies have acclimated their users is slowly making its way offline, where it becomes increasingly difficult to opt out of these systems.
This has multiple legal implications, because the ethical basis for so many of these data collection systems is that users have consented to the way companies often manipulate or sell their data. In an age of TOS;DR (terms of service, didn't read), clicking the "accept" button often gives companies carte blanche to use your private data, and it's frequently impossible to know where that data will go. Like the tobacco companies that preceded them, companies like Amazon and Facebook can argue that you've made an informed choice: You consented to their terms of service and willingly uploaded the photos, so you can't blame them for the results.
To protect consumers’ data online, the European Union and the state of California have begun to adopt regulations that strengthen a user’s right to be informed about and intervene in the processing of their personal data. Both the EU’s General Data Protection Regulation and the California Consumer Privacy Act require companies to ask for consent prior to data collection processes and give users the right to know what data will be used for what purpose. The proposed Commercial Facial Recognition Privacy Act introduced in the Senate has similar consent requirements for facial recognition data. Philosophers Luciano Floridi and Mariarosaria Taddeo—directors of Oxford’s Digital Ethics Lab, which collaborates with Google, Microsoft, and the European Commission to develop ethical standards—have also highlighted consent as a major component of an ethical framework for artificial intelligence. Consent, in other words, has become one of the major legal and ethical benchmarks for regulating data collection processes.
But consent isn't an ethical rubber stamp. If anything, it lets those in power avoid responsibility for the consequences of data collection by shifting the ethical burden of safeguarding personal information away from companies and onto individual consumers. Many companies have chosen to comply with the GDPR by blocking all EU users; others bury users in a deluge of pop-ups with large "accept" buttons but hidden opt-out options. These tactics provide the illusion of choice while suppressing actual alternatives, a phenomenon that also exists offline. Yet relying on consent for ethical reasoning about surveillance is so pervasive that people publish articles on how to jump through increasingly complicated hoops to opt out of airport face scanning instead of questioning why we never had a choice about whether the scanners were installed.
More importantly, opting out of data collection often means losing access to online services, and refusing to participate in these systems can mean staying off the internet entirely, which comes with massive social and economic costs. The question, then, isn't whether you should opt out; it's whether you can afford to.
Opting out in the offline world is just as difficult, if not more so. Good luck taking a walk through the park without getting caught on camera; your only opt-out option might be to wear a Hunger Games–style getup to fool face detection algorithms. If you have a driver's license or state ID, you might be one of at least half of all U.S. adults included in government facial recognition databases with no recourse. And at the airport, staff do not disclose that you can opt out of face scanning, yet refusing the biometric scan can mean detainment and invasive searches.
Facial recognition technologies are already being integrated into policing practices that disproportionately target communities of color. As such, people of color might reasonably choose to opt out of training the system that will be used to surveil them with greater precision. Hypothetically, if enough people do this, the training data sets will again skew toward lighter-skinned people. This leaves communities of color in a double bind: Opt out, and the racial bias of the data sets increases, leading to more soap dispensers that simply don't respond to dark skin. Opt in, and your image may be used to fine-tune a facial recognition system that Amazon is trying to sell to Immigration and Customs Enforcement.
Even if you don't consent to having your data used, companies and governments will simply turn to the faces of more vulnerable people to populate their data sets. The very narrow choice to withdraw consent does not allow an individual to reject unjust uses of technology that will affect not only themselves but also their wider communities. A.I.-driven facial recognition surfaces racism that is already embedded within American surveillance and policing practices. Collecting more data or making users more informed about how their data is being used won't fix that.
We aren't saying that consent has no place in this ecosystem. But it shouldn't be the only way we let people make decisions about data protection. Researchers like Sasha Costanza-Chock and nonprofit organizations like the Participatory Budgeting Project have also looked to participatory approaches to data set creation, research, and analysis that involve community input at the outset, rather than seeking consent for opaque decisions after the fact. For example, the Make the Breast Pump Not Suck Hackathon at MIT sought to rethink the design process by pairing technologists with community advocates in birth clinics, in-home maternal care programs, and peer counseling and support groups to collaboratively design new technologies from scratch.
Yes, it’s impossible to ensure that all voices are heard and accounted for—the final product will always be a series of compromises between different community stakeholders. But as the system stands, there are few—if any—participatory approaches to the creation and design of these algorithms. While these kinds of granular, in-depth collaborations are easier to achieve on a local level, it’s possible to imagine a world where this kind of public accountability and engagement scales up: What would an algorithm’s design look like if it were subject to public hearings and approval, like alcohol licenses or building permits, prior to deployment?
There is power in numbers. To A.I., any individual's data is interchangeable, but linked with other photos in a data set, it amplifies the system's capabilities. By framing data protection as an individual choice, companies hope to prevent us from collectively reflecting on how our information is used in society. In Somerville, Massachusetts, City Councilor Ben Ewen-Campen spoke against the common notion that privacy invasions are inevitable with technological change. "The community, activists, the government working together can actually shape this stuff," said Ewen-Campen. "We don't have to just sit back and take it."