The biggest looming, but overhyped, threat to recent efforts to preserve your data privacy is the First Amendment. Not because the First Amendment and privacy are inherently in conflict—quite the opposite—but because the First Amendment may be the most convenient defense companies can offer up when seeking to avoid privacy regulations.
The First Amendment is often weaponized to undermine our privacy interests. In June, Maine enacted a data privacy law that bars internet service providers, such as Comcast, from using or sharing consumers’ sensitive data without consent. In February, a number of interest groups sued the state, claiming that the law violates the First Amendment. Even more brazenly, Hoan Ton-That, the CEO of Clearview AI, a company that sells the use of its facial recognition software to law enforcement, recently claimed that the First Amendment gives the company the right to scrape photographs of faces from public social media platforms. This claim not only ignores valid concerns about facial recognition technologies—their tendency toward discrimination, their use in pervasive location-tracking, including of activists or dissidents—but also gets the First Amendment wrong.
Both the Maine lawsuit and Clearview’s claim oversimplify and exaggerate what has admittedly been a historic tension between privacy and speech interests. The rights to free speech and privacy can sometimes clash, it’s true. But there is also a strong tradition of U.S. legal precedent recognizing their interdependence. The First Amendment does not always require the overturning of privacy laws. In fact, sometimes it supports upholding them.
One classic tension between free speech and privacy is what can be thought of as the “cat is out of the bag” doctrine. The Supreme Court limits the government’s ability to restrict newspapers’ dissemination of information regarding matters of public concern that the government has itself already released. In one case, Cox Broadcasting Corp. v. Cohn, the court found that where a rape victim’s name was publicly disclosed in a court record, the First Amendment restricted the state’s ability to ban its subsequent publication. Another case, Florida Star v. B.J.F., upheld the same principle when an alleged rape victim’s name was taken from a police report. Once out of the bag, in plain sight, the information was deemed surrendered by the government.
These cases, read in broad strokes, might suggest that no publicly available information can be regulated without raising free speech problems. But they’re at least in part about ensuring that the government is more careful with information itself before punishing subsequent disclosure.
The broader related myth is that there’s no such thing as privacy in public (irrespective of who first disclosed the information publicly). This provides fodder for claims like Clearview’s, that once a person’s face or other personal information has been made available online, or has been publicly disclosed at all, the First Amendment protects a company’s interest in gathering and disseminating that information.
This mythic principle would make it very difficult if not impossible for states to regulate the collection, use, and retention of biometric information (your “face print” or “face map”), as Illinois has done. Your face, after all, is your unique interface with the world, the permanent identifier you carry around with you in locations both public and private. Plenty of other identifiers deemed “personal information” in both existing and new privacy laws—your name, your birth date, your address—are also information that might readily be obtained through publicly available sources, and so presumably fair game if the fallacy were given force.
Upon closer scrutiny, however, this argument is far from convincing. First, these older cases involved newspapers, reporting on matters of public concern, such as judicial proceedings. While the First Amendment applies equally to speakers who are not journalists, the court has long made it clear that when evaluating speech claims, newsworthiness matters to its analysis. Recently, in Snyder v. Phelps, the court wrote that:
restricting speech on purely private matters does not implicate the same constitutional concerns as limiting speech on matters of public interest. There is no threat to the free and robust debate of public issues; there is no potential interference with a meaningful dialogue of ideas; and the threat of liability does not pose the risk of a reaction of self-censorship on matters of public import.
The information Clearview AI is gathering—biometric data of our faces from personal profiles on Facebook, LinkedIn, Twitter, and YouTube—usually isn’t a “matter of public interest.” It’s the exact opposite—extremely personal and private information being used by the government (through Clearview AI) to track, police, and control the populace.
We are cautiously optimistic that courts won’t fall for the simplistic arguments offered by companies trying to fend off new privacy laws, in part because the Supreme Court has recently expanded its understanding of privacy harms. Companies often ignore that the above cases do empower governments to enact privacy laws when they have an interest “of the highest order” in doing so—that is, when they can articulate significant privacy harms. Of late, the Supreme Court has recognized exactly the kinds of harms implicated by large-scale, ongoing surveillance enabled by technologies such as facial recognition, even when the surveillance is of “public” space. Such surveillance, the court acknowledged, reveals traditionally sensitive information such as your health (if you are recorded regularly visiting a doctor), your political affiliations (if you are tracked to a protest), your addictions (if you are tracked to an Alcoholics Anonymous meeting), and even your sexuality (if you are tracked to a gay bar). Additionally, surveillance over time reveals patterns in your behavior, sensitive inferences that you may not knowingly reveal at all.
A second trend in First Amendment law, the ever-expanding scope of what counts as speech, might challenge privacy laws in other ways. In its 2011 Sorrell v. IMS Health decision, the Supreme Court struck down a Vermont law that regulated the disclosure of doctors’ prescribing practices by pharmacies to pharmaceutical companies who used that information to then market their products to doctors. The court ruled that it was an impermissible regulation of speech. But it has never held that all data is speech (that is, that all regulation of data triggers First Amendment scrutiny).
There is a not insignificant chance that courts will consider most of the wave of new privacy laws (such as Maine’s) to be regulation of bargains struck between consumers and companies, not regulations of speech. In any event, classifying something as “speech” is only the first step in First Amendment analysis. Courts must still evaluate whether the purported impingement on speech is counterbalanced by a sufficiently compelling government interest. The harms at issue for most state data privacy laws are more significant than the harms discussed in Sorrell. (Often forgotten is that the privacy harms discussed in Sorrell were harms to the prescribing physician, not the affected patient.) And Sorrell was decided before the Supreme Court’s more recent recognition of big data privacy harms, discussed above—which the 9th U.S. Circuit Court of Appeals recently cited in discussing the harms averted by Illinois’ biometric law.
As we’ve each underscored in our research reconciling the right to privacy with the First Amendment, the two are often interdependent. Where privacy regulations advance First Amendment interests, they are on stronger legal ground against First Amendment challenges.
Privacy is frequently a prerequisite for the enjoyment of our First Amendment rights of speech and association. Anonymity can serve as what Washington University law professor Neil Richards has labeled “intellectual privacy,” allowing citizens to develop, test, and potentially express new ideas and feelings. Anonymous speech has been repeatedly protected by the Supreme Court, which has recognized that without some modicum of privacy, controversial ideas may never be expressed for fear of negative backlash and censorship. Privacy, that is, can prevent individuals from conforming to majority views. Efforts to obscure your identity can serve as a form of expressive resistance, signaling disapproval of ubiquitous surveillance efforts—such as Clearview AI’s practices—that make it much harder to remain anonymous both to corporations and to law enforcement agencies. And, as the Supreme Court has recognized, anonymity can further First Amendment values even if our identities aren’t completely secret or are known in other contexts. The lack of privacy in one context doesn’t erase its First Amendment values in another context.
The privacy interests safeguarded by the Maine ISP law, which protects the content of user communications (among other data, including what they look at online), are just as clear. The companies suing Maine for preventing ISPs from selling users’ private data ignore that privacy in one’s communications is central to free expression and free association, as the Supreme Court has recognized. In Bartnicki v. Vopper, a case assessing the constitutionality of the federal wiretap law, the Supreme Court observed that “the fear of public disclosure of private conversations might well have a chilling effect on private speech.” In Carpenter v. U.S., the court recognized that pervasive, persistent surveillance even of noncontent communications information such as location information can chill “familial, political, professional, religious, and sexual associations.”
Put simply, privacy isn’t always the enemy of the First Amendment, as companies eager for a deregulatory approach to their privacy-infringing activities would have you believe. Laws ranging from LGBTQ anti-discrimination protections to public health laws requiring that women be provided medically accurate reproductive information have all been attacked as violating the First Amendment’s free speech guarantees. They clearly don’t. On the contrary, particularly when it comes to vulnerable actors in our society, regulations aimed at protecting basic privacy rights are often necessary to safeguard First Amendment rights.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.