Future Tense

How the Biden Administration Can Tackle Social Media Regulation Without Chilling Free Speech

Step one: Stay away from the political black hole of Section 230 reform.

Joe Biden with Kamala Harris in Wilmington, Delaware, on Aug. 12. Toni L. Sandys/Washington Post via Getty Images

As people across the U.S. finished Thanksgiving dinner, the man who refuses to accept his impending eviction from the White House threw a temper tantrum on social media. After thousands of Twitter users mocked him with the hashtag #DiaperDon, Trump accused the social media company of fabricating trending hashtags and endangering national security and once again called for an end to Section 230, the federal law that empowers internet platforms to set and enforce their own content rules. He is now threatening to veto the annual defense spending bill unless it includes an unrelated provision that rescinds Section 230.

Trump complains of censorship when Twitter and Facebook fact-check and label his posts as misleading, or when they restrict or remove his rants containing blatant disinformation about the voting process. But his Thanksgiving tantrum was at least honest: He wants free speech for himself and his supporters—but not for his critics. Now that election recounts and lawsuits have failed to overturn the election result, conservatives like Sen. Ted Cruz are baselessly blaming “big tech censorship” for Trump’s defeat. They make these claims despite the fact that, as is well documented, conservative speech is thriving on Facebook and election disinformation is abundant on YouTube.

Trump’s supporters are painting the incoming Biden-Harris administration as a cabal of censorious liberal elites who are pushing Big Tech to stifle conservative speech. Regardless of whether such claims have any basis in provable fact, the accusations are likely to intensify after the inauguration—and they will be used to undermine the legitimacy of the new administration’s efforts to address urgent crises like COVID-19 and the economy.

The Biden-Harris administration can neutralize such attacks by articulating a clear vision of free speech and civil rights in the digital age that transcends partisanship. We need a road map for protecting the rights of all Americans, regardless of how they voted in 2020. Otherwise, the cause of free speech will continue to be weaponized—even if cynically and hypocritically.

To accomplish this, the new administration should appoint a bipartisan commission including people from the public, private, and nonprofit sectors to develop policy recommendations for platforms that reinforce the importance of free speech for democratic, open, and inclusive societies. Their work should be guided by four shared principles that transcend the partisan warfare that falsely and destructively pits values of free speech and civil rights against one another.

1. Free speech and civil rights are interdependent. Billions of people around the world depend upon Facebook for information and civic discourse—yet the platform is shaped by a business model based on targeted advertising, which prioritizes the content most likely to go viral and thereby maximize engagement. Often that is content that arouses strong emotions and tends toward the extreme, especially when it emanates from influential people. Facebook CEO Mark Zuckerberg has invoked his commitment to free speech to explain why he allows prominent politicians to violate the platform’s rules against certain types of disinformation and even incitement to violence. But free speech is not served when some users are more “equal” than others—to borrow from Orwell’s Animal Farm.

In her final report from a two-year civil rights audit of Facebook published in July 2020, civil rights expert Laura W. Murphy slammed Facebook’s top management for making no credible efforts to seek out civil rights expertise or to understand the company’s impact on civil rights, with “devastating” consequences. When a social media platform enforces its rules differently for different types of people, it ends up discriminating against society’s least powerful, who must abide by community guidelines or risk having their postings deleted or accounts deactivated. Public figures deemed “newsworthy” enjoy greater freedom—and, therefore, greater power. Moreover, people with marketing and advertising budgets can take advantage of platforms’ targeted advertising business model to “boost” and promote messages so that they can reach bigger audiences. When political debates, campaigns, and social activism rely so heavily on such a skewed and unequal information environment, democracy suffers.

Online harassment is another example of how free speech and civil rights are inextricably linked. Women, ethnic minorities, and LGBTQ people who are targeted by physical threats to themselves, their families, or their homes are effectively forced to risk their safety and emotional well-being in order to exercise their right to free speech online. Harassment of journalists and writers (sometimes encouraged by public figures) has gotten so bad that PEN America created a field manual for individuals and media organizations to fight back. Recognizing this civil rights challenge for the digital age, Biden has already pledged to convene a National Task Force on Online Harassment and Abuse as part of a plan to counter violence against women. His campaign website says the task force will be asked to develop “cutting-edge strategies and recommendations” for government policy as well as actions that can be taken by “private entities” from companies to schools and nonprofits.

Some right-wing media outlets have already fingered the proposed task force as evidence of the incoming administration’s censorious intentions. Such accusations can be neutralized if the more narrowly focused anti-harassment task force is folded into the broader bipartisan commission I propose. The commission’s mandate should address all types of speech, from harassment to disinformation and violent extremism, that can threaten people’s safety, health, or ability to exercise fundamental rights including the right to vote. The commission should be led by people from across the political spectrum who share a commitment not just to free speech but to all civil rights enshrined in the Constitution as well as universal human rights standards.

The commission will need to work pragmatically in light of a harsh reality. As partisan posturing and mudslinging at the most recent Senate Big Tech hearing made abundantly clear, this country lacks any semblance of consensus about the definitions or perpetrators of “disinformation” or “extremism.” Trump is not the only politician to have seized upon the idea of revoking internet platforms’ immunity as a political cudgel to achieve partisan objectives. For this reason, until progress can be made in advancing policies around which there is stronger bipartisan consensus, the new administration should avoid wading into the political black hole of Section 230 reform. Instead, the commission can help build trust by focusing on measures that strengthen transparency and accountability for government actors as well as for companies.

Anybody with power to shape or track the flow of digital information must be subject to appropriate oversight so that when our rights are violated, we know who is responsible and can hold them accountable using the law and the courts, our choices as consumers and investors, or our votes. Research has shown that when writers believe that their online activities are being tracked without sufficient safeguards, they censor themselves. Activism, investigative journalism, and political dissent suffer as a result. For that reason among others, surveillance reform and passage of a strong federal privacy law should be vital components of the next administration’s free speech agenda, demonstrating a strong commitment to protect the rights of all Americans.

2. Transparency is the first step toward accountability. We are citizens of a democracy, and our freedom of expression and opinion is threatened when we are unwittingly manipulated through unaccountable and untransparent algorithms. Targeted advertising mechanisms enable paying customers to target us with messages tailored to provoke us, based on detailed digital dossiers that social media companies now compile about us. Digital platforms’ targeted advertising business models must be subject to strong requirements for transparency, accountability, and consumer choice.

Such requirements are essential so that people can understand—and also control—how certain content is promoted to or targeted at them. Alongside federal privacy regulation, disclosure rules about how content is policed, curated, amplified, and targeted would strengthen the power of American consumers and investors to hold companies accountable if they fail to mitigate and prevent harms. Transparency rules would also strengthen cross-border trust necessary for American businesses that offer digital products and services to maintain their appeal to global consumers and be welcomed by regulators around the world. My colleagues at New America’s Ranking Digital Rights program have published extensive recommendations for specific types of disclosures companies should be required to make. (New America is a partner with Slate and Arizona State University in Future Tense.) Such disclosures will enable companies to demonstrate respect for the human rights of all users regardless of nationality or citizenship.

3. Human rights standards are a tool for corporate accountability. The First Amendment restrains the government—Congress in its text, and all public authorities by extension—from restricting anyone’s freedom of speech or freedom of the press. While conservative politicians and some tech CEOs like Zuckerberg invoke the First Amendment in relation to users’ speech on social media, it does not actually apply to speech hosted or transmitted by private companies. International human rights standards, on the other hand, do apply to companies.

Human rights standards offer tools to hold companies accountable for harms caused by online speech without needing to change the scope of platforms’ legal liability. The U.N. Guiding Principles on Business and Human Rights assign companies a responsibility to respect human rights, even while governments hold the primary duty (and power) to protect human rights. To meet their responsibility, companies not only need policies outlining their commitment to human rights, but they also need to disclose credible evidence that they conduct due diligence—often in the form of impact assessments—to determine how their business operations affect human rights.

Equipped with this information, tech companies can take actions to ensure that they do not cause or contribute to violations of their users’ human rights, including privacy and freedom of expression. Importantly, Article 19 of the Universal Declaration of Human Rights defines “the right to freedom of opinion and expression” to include the “freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” Disinformation campaigns waged by politicians to achieve political outcomes are arguably a violation of people’s right to freedom of opinion without interference, especially when platforms are not transparent about how content is prioritized and amplified to users.

By setting the expectation that U.S. tech companies should make and meet commitments to respect the human rights of users (as well as their employees and contractors), the new administration would strengthen the case for regulators like the Federal Trade Commission and the Securities and Exchange Commission to require tech companies to disclose measures they are taking to identify and mitigate harms that their businesses may cause.

4. In the digital age, free speech depends on internet access. Even more fundamentally, without access to basic information and communication channels (like the internet), people cannot fully exercise their right to free speech. The incoming administration is rightly making universal broadband a priority for all kinds of reasons, including economic recovery and social justice. Building on that, the commission should make the case that free speech rights are moot for people who lack basic access to the channels and platforms where most public discourse now occurs.

Finally, there is the issue of who controls the platforms most people use once they get connected. Free speech is not well protected or served when a small handful of companies control a few platforms that most Americans depend upon for information and political discourse. It is vital for regulators to foster more robust competition so that people have more choice across a much broader range of different types of platforms with different rules and business models. Only then can we keep the public discourse from being dominated and even manipulated by particular groups of people.

Antitrust enforcement is an essential first step toward improving those odds. So are policies and regulations that enable the emergence of nonprofit “digital public infrastructure” projects that can empower citizens to design and govern platforms for information sharing and discussion that are better suited for community discourse and problem-solving, rather than maximizing targeted advertising revenue. Regulatory changes requiring data portability and interoperability could support new types of businesses, like “middleware” companies recently proposed by Francis Fukuyama and other members of a working group convened by Stanford University: Such companies could operate decentralized interfaces between social media platforms and their users, enabling different communities to choose and design social media content rules that fit their own values, concerns, priorities, and objectives.

Good policy ideas will need public support across partisan divides in order to overcome the brutal partisanship that now dominates Washington policymaking. To that end, the new bipartisan commission on free speech and civil rights should be led by people with deep ties to a range of communities and constituencies across the country. Policies affecting online speech need to be based on a shared commitment and understanding that free speech and civil rights are interdependent and intertwined—in service of all Americans regardless of whether we supported the winner or loser in the last election.

Portions of this piece were adapted from a research paper titled “Reclaiming Free Speech for Democracy and Human Rights in a Digitally Networked World,” recently published by the University of California National Center for Free Speech and Civic Engagement, and a two-part report series titled “It’s the Business Model” published by New America’s Ranking Digital Rights.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.