The Authenticity Trap

Mark Zuckerberg thinks Facebook’s problems can be fixed with “authentic” speech. He’s so wrong.

Facebook CEO Mark Zuckerberg leads a conversation on free expression at Georgetown University on Thursday in Washington.
Riccardo Savi/Getty Images for Facebook

In his speech on “free expression” Thursday at Georgetown University, Facebook’s CEO, Mark Zuckerberg, made an important assertion: Facebook has found that its “best” speech-regulating strategy is “focusing on the authenticity of the speaker rather than the content itself.”

Zuckerberg went on to describe the Russian content that roiled his platform in 2016 as merely “distasteful,” implying that the content itself was tolerable. The “real issue,” he said, was that the Internet Research Agency’s ads were “posted by fake accounts coordinating together and pretending to be someone else.” But the “real issue” with foreign election interference isn’t disguised identities or coordinated activity by foreign accounts. The interference itself is what’s harmful, not a misrepresentation about who’s running the account.

The Russian interference example, which Zuckerberg and his executives have cited again and again, highlights the company’s efforts to justify and normalize what I call “authenticity regulation”: Facebook’s evolving set of policies designed to verify that you are who you say you are, by collecting all sorts of identifying information about you and monitoring your online (and offline) behavior.

When Facebook unveiled revisions to its Community Standards in September, “Authenticity” stood front and center as one of five important “values” guiding the platform’s regulation of content. A graphic embedded in Facebook’s Community Standards webpage depicts Authenticity as a woman shining a flashlight on her own face, while a second person watches her.

The other four values (voice, safety, privacy, and dignity) need little explanation: they all describe individual human rights. “Authenticity” is not like these other things. As I argue in a forthcoming paper, authenticity differs from these rights because it mainly serves Facebook’s own interests, not those of individual users, by advancing the company’s business model.

In everyday speech, “authenticity” brings to mind introspection and self-definition (as in: “be true to yourself”), following one’s heart, and being present and engaged in the moment. Psychologists have found that authenticity in this self-actualizing sense improves well-being: it’s good for you. (They’ve also found that people who have more power in life tend to feel more authentic.) In other words, this kind of authenticity matters primarily because it benefits you, not because it benefits other people.

But this is not the kind of “authenticity” that Zuckerberg is talking about. Facebook uses “authenticity” to mean that you’ve presented only accurate details about yourself online, for the benefit of someone else. Facebook would say that someone else is the online community, but it’s really Facebook itself. Facebook pushes “authenticity” as a feel-good speech value, but authenticity provides significant business value to Facebook.

Facebook’s authenticity rules include its real-name policies as well as provisions in its terms of service that command users to provide only accurate information about themselves. They include its verification rules, which require anyone who wants to discuss “national issues of public importance” using Facebook’s paid tools to send the company sensitive personal information. Facebook’s decisions about what issues are of “public importance” are highly consequential, because they determine who has to provide the company with a passport or driver’s license image to be allowed to speak. For instance, the Washington Post found that verification requirements have placed extra burdens on speech that touches on LGBTQ issues, because Facebook has treated all LGBTQ-related content as “political.” And Facebook deploys verification selectively: Under its rules, “poverty” is a political issue requiring speakers to verify their identities, but “wealth” is not.

Facebook’s authenticity practices also include product features that capitalize on users’ personal information, like the feature that allows an elected official to host a virtual town hall on Facebook Live, attended only by “authentic” constituents—people whom Facebook knows to live within the official’s jurisdiction. And Facebook’s authenticity regulation includes its systems of ad customization and microtargeting. These systems, which generate substantially all of Facebook’s profits, use your identifying information to “serve” you ads—they regulate what information you receive. Sen. Ron Wyden has called on Facebook to voluntarily suspend microtargeting of political ads for the 2020 election, a great proposal that Facebook will never embrace because it strikes right at the heart of the platform’s business model.

Authenticity rules help Facebook quantify and surveil users, fix fees for advertisers, measure user growth, and develop accurate machine learning. In order for its data systems to “learn” patterns of human behavior, for example, they must have accurate data inputs. Thus, someone who misrepresents his age to Facebook corrupts the company’s machine learning, because the system will attribute all of his behaviors to a younger (or older) person and glean false insights about human behavior from that attribution. That’s inauthenticity—an offense against the business model.

Increasingly, companies are policing users’ behavior as a measure of authenticity, looking for associations or breaks in patterns that raise red flags. They do this under the label “coordinated inauthentic behavior,” which sometimes sweeps “real” people into its tractor beam.

By making “authentic” identity a valuable commodity, Facebook encourages identity theft, because a stolen identity can be harder to detect as false. It also encourages an arms race between tech companies and identity thieves, including foreign nation-states. This not only increases the value of identity-verification services, creating profit opportunities for the same companies that contributed to the problem, but pushes companies like Facebook to form reciprocal relationships with law enforcement.

Sure, it would be nice if everyone you encountered online were who they claimed to be. And online fraud is a crime that requires enforcement action. But is a platform of 2 billion people, all operating with complete personal transparency, really possible or even desirable? And what are we willing to give up in order to let Facebook (and other social media companies) police our identities? The benefits of authenticity regulation for users remain elusive.

Facebook commonly elides the difference between authentic identity and authentic content, suggesting that people who present only true information about themselves produce authentic (i.e., good) speech. “You can still say controversial things,” Zuckerberg said at Georgetown, “but you have to stand behind them with your real identity and face accountability.” But it’s misleading to suggest that content from “authentic” speakers is particularly high-quality. Lots of people operating under their real identities spread false and misleading content, including @realDonaldTrump. Furthermore, recent research suggests that some people may be more likely to act abusively online when their real-world identities are known. After all, plenty of online provocateurs operate under their real names.

Authenticity regulation suggests that when speech is objectionable, it is because of the identity of the person speaking and not because of the content of the speech. (That’s what Zuckerberg was doing when he brought up Russian electoral interference in his Georgetown speech.) This value judgment differs from traditional notions of free speech, which acknowledge that some kinds of speech are both objectionable and protected from censorship.

At least Zuckerberg nodded to the ways in which authenticity regulation offers a valuable trade-off for the company.

“Focusing on authenticity and verifying accounts is a much better solution than an ever-expanding definition of what speech is harmful,” Zuckerberg told the Georgetown students. As the Russian interference example shows, Facebook continues to struggle to understand the real nature of the harms caused by content that spreads on its platform. By focusing on authenticity, Facebook can avoid the pitfalls of content moderation, which have made the company a lightning rod for criticism, while continuing to develop a deep store of identifying information about its users, which can be monetized.

Ultimately, however, authenticity regulation likely offers the same opportunities to suppress viewpoints and manipulate debate as content regulation. It’s not much of a beneficial trade-off for us.

Disclosure: The author owns a small amount of Facebook stock.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.