Future Tense

The Real Free Speech Problem on Social Media

Photo caption: A Twitter alert reading "Get the facts about mail-in ballots" appears below a tweet from President Trump. This little blue exclamation point has created huge waves on social media. Justin Sullivan/Getty Images

There is a free speech problem online, but President Donald Trump is not the victim. His May 28 executive order was clearly intended to intimidate social media companies. It was a tantrum in legalese, in response to Twitter’s mild fact-checking of his lie about nonexistent voter fraud—one of the few categories of lies that digital platforms say they take seriously. While incoherent and spiteful, the order has already done constitutional damage by retaliating against a private company for its views. This way lies the evisceration of speech rights and skittish platforms prostrated to power.

When President Richard Nixon wanted to stop CBS News and the Washington Post from covering Watergate, he tried to use the Federal Communications Commission to strong-arm the media companies by threatening their broadcast licenses. (The Post’s then-parent company also operated TV stations.) The rule of law and respect for the First Amendment put an end to that gambit. Trump’s retaliation against Twitter, again invoking the FCC, is an even more blatant power grab that capitalizes on confusion about the roles of digital platforms and the government when it comes to free speech. While Twitter is a private editor and can label, remove, or screen speech at will—something the Trump administration itself has relied on when blocking followers—the White House is constrained.

No one wants a digital platform to be an “arbiter of truth”—the taunt Facebook founder Mark Zuckerberg used to chide Twitter over its election fraud fact-checking and presumably to excuse his own company’s decision to allow false political advertising and ignore many significant offline harms. But the fact is, digital platforms make content choices all the time. Facebook itself decides what is true when it removes false pandemic information and commits itself to certain platform “values.” A platform that was truly passive would almost certainly have a lot more porn and extremism of the sort that have made 4chan and 8chan notorious. In addition to their terms of service and values, the major digital platforms all prioritize content according to proprietary algorithms that decide what people get to see.

Neither the president nor anyone else has a constitutional right to speak on or to be amplified by these platforms. At the same time, people must be able to speak and to access credible information, especially during a pandemic, protests, and elections. But platform algorithms that prioritize engagement are not too interested in what’s true or who gets to be heard. Disinformation and harassment flourish, while credible information struggles to penetrate. Public officials have been largely exempt from even the milquetoast rules calling out mendacity and incitement on the platforms. Individuals and entities with the most power are the least accountable to platform rules.

Trump’s executive order would exacerbate online power dynamics by unconstitutionally leveraging the power of the government to decide who deserves privileged treatment. It would enlist federal agencies to decide which platforms and which content are immune from liability.

The victims of platform power are average citizens who are pushed toward division and conspiracy, and who are denied real news as Google’s and Facebook’s ad practices drain local journalism of its lifeblood and algorithmically depress its salience. Citizens have no idea in many cases who is pushing messages at them or how personalized propaganda is tailored to them. While digital platforms say they want to promote meaningful conversation or “community,” they have never owned up to their role as our new media gatekeepers. And they fail to provide transparency about their surveillance and targeting of users, their algorithmic recommendations, and their choices about fact-checking, promoting, demoting, or removing disinformation.

Twitter, at long last, is taking steps to protect election information and deter threats of violence. The company finally placed a screen over a Trump tweet glorifying a possible violent response to riots. This ensures the message can still be seen by those who want to view it but adds a warning label as useful friction and restricts retweeting and viral distribution. The platform is also being transparent about its decision-making process via the label explanation and public comments.

Trump’s new executive order would pressure platforms to look the other way on disinformation emanating from the powerful. It urges the FCC to reinterpret Section 230 of the Communications Decency Act to deny platforms immunity from liability if the agency deems them unworthy or finds that specific corrections or down-rankings of content are inappropriate. It also enlists the Federal Trade Commission to determine whether the platforms are acting inconsistently with their terms of service and to consider complaints of “bias.”

The exercise of broad government discretion to punish platforms for moderation is just the kind of tyrannical power over speech that the First Amendment forbids and that Congress legislated against. If this weren’t enough, the executive order also instructs the attorney general to convene like-minded state attorneys general to investigate platforms. The order would create an America where the government evaluates and punishes editorial judgments about hate speech, harassment, and disinformation, pushing platforms to become either unmoderated free-for-alls or gated communities for the powerful.

Digital information platforms should ignore the executive order, and not cede their private responsibility to reduce harm. Twitter should double down on what appears to be a risk-based approach to content moderation and fact-checking—focusing on actions that pose the biggest threats because, for example, they call for violence and achieve viral spread. And since widely distributed disinformation and incitement pose more risk than content that reaches few people, the companies should change their practices to contain rather than boost conspiracy theories and outrage. Superspreaders of disinformation deserve more platform scrutiny.

If this moment proves anything, it is that what is said and heard should not depend so heavily on a decision by Jack Dorsey or Mark Zuckerberg. We desperately need more diversity and distributed power among digital platforms. As long as we have oligopolies as media gatekeepers, they should be responsible for taking steps to reduce risks.

As an example, perhaps the family of the former staff person to MSNBC Morning Joe host Joe Scarborough should have the right to sue Twitter for promoting Trump’s tweets falsely suggesting that Scarborough murdered the staffer. The right could apply at the point at which such a tweet becomes sufficiently viral to raise a presumption of harm. A risk-based approach to platform liability would align algorithmic reach with responsibility—messages with more reach get more scrutiny—and would also more easily scale, since the focus is narrowed.

Policymakers serious about protecting free expression online should empower users by updating existing law and regulations. They could start with consumer protection laws to combat fraudulent ads and manipulation. They could apply civil rights rules to prevent harassment in the digital public square and adopt new privacy rules to end the surveillance and exploitation of digital platform users based on personal data.

Policymakers could reduce the flow of personalized propaganda, funded by dark money and delivered through micro-targeted ads, by restricting ad targeting, creating standardized cross-platform searchable ad databases to provide visibility into who is sponsoring ads, and establishing know-your-customer procedures to bring dark-money funding into the light.

Just as critical as reducing the viral flow of lies and incitement is the task of increasing the availability of credible local information. During the recent demonstrations, it has been difficult in some cases to find out about curfews and closures, in part because the United States has lost 2,100 local newspapers since 2004 and half its newsroom jobs since 2008. The lights have gone out completely in news deserts across the country.

We need a fund for local journalism—a public good that the market clearly is not sustaining—possibly financed by digital platform ad revenue. Eligibility could be limited to outlets that follow journalistic codes of practice (for example, transparency, fact-checking, and corrections), possibly relying on organizations such as the Trust Project, a coalition of news media that aims to help consumers assess the quality and credibility of journalism, or NewsGuard, which rates websites for accuracy and transparency. Funds could be used more broadly for a civic information infrastructure that ensures that public access, especially to information about and from government, is not dependent on digital platform choices.

The bottom line is that the online free speech problem is not an issue of political bias, but instead a problem of power—power that platforms have over users by virtue of their surveillance practices and oligopoly status. Trump is not the victim of this system, but one of its principal beneficiaries. The victims are those without amplification networks, those who are targeted for harassment and disinformation, and the general public that deserves a place for civil discourse. The victims are truth and democracy.

The policy fix is not to give government or platforms more power to make opaque, arbitrary, content-based decisions. What we need are different business models and policies that support users in protecting themselves, including requiring that platforms provide transparency, take responsibility for viral spread of harmful content, support personal freedom of mind, and encourage news in the public interest.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.