If Justice Samuel Alito’s leaked opinion overturning Roe v. Wade accurately forecasts the Supreme Court’s final opinion, the next battles in abortion regulation will be fought in Congress, in state capitals, and online. The online fight will be brutal and fierce—and will test the ability of platforms and regulators to control harmful speech.
It is easy to imagine abortion opponents posting graphic photos of aborted fetuses, and pro-choice advocates sharing images of women and children harmed by lack of access to safe medical care. Protests will be documented by videos that go viral online. Both sides will make threats online. Online tensions are likely to boil over into offline violence.
Platforms will be caught in the middle, trying to mediate a culture war with immense consequences for human welfare, while making decisions that please almost no one and anger both Democrats and Republicans. Inevitably, their decisions will be inconsistent, their policies and reasoning will shift as new facts emerge, and they will make mistakes. Each controversial decision will be an opportunity for a news story, a congressional hearing, or a government investigation. The calls to regulate online content will grow louder and louder. But will they work?
At the federal level, it isn’t likely. All of the fierce debates over online content in recent years—the George Floyd protests, the 2020 election, and the Jan. 6 attack on the Capitol—have spurred calls for regulation of Big Tech. But despite repeated pleas to do something, Congress has been caught flat-footed. Democrats want to compel platforms to remove more content, while Republicans want to compel them to remove less. With opposing rationales, they have not been able to forge a path forward.
While Washington will almost inevitably remain gridlocked, reform at the state level is more likely. In a post-Roe world, regulating abortion will fall to the states—which are also likely to take the lead in creating rules around online content. Over the past few years, states have embraced their role as the “laboratories of democracy” on tech reform, passing laws in areas like privacy and taxation.
But regulating online content presents unique challenges for states. The First Amendment narrows what states can do, since they cannot compel private actors like online platforms to carry or remove speech. They’re also limited because federal law preempts state law, with Section 230 shielding platforms from liability for content they host. As a result, state efforts to regulate online content have faced legal challenges and been blocked by courts before they ever take effect.
Faced with these challenges, what can state policymakers do to respond to the coming online content war?
In a recent paper, we offered a menu of options for state policymakers seeking to regulate online content.
We start from the proposition that proposals will be more likely to succeed if they take seriously the concerns of both sides of the aisle. For that reason, our recommendations broadly tackle issues raised by both Democrats and Republicans, addressing both harmful content and the costs of errant removal.
First, to better address upcoming problems with online content, we need to understand the scope of those problems. By funding work by in-state researchers and providing platforms liability protections for sharing research data, states can help us better understand the debates, types of content, and controversies that will populate online platforms in the wake of Roe’s reversal. Amid fractious online discussion, it is essential that researchers and policymakers understand not only how the problems of online content and content moderation are evolving, but also how those problems are producing harms across online and offline communities.
Second, platforms will likely be forced to make difficult moderation decisions about graphic or extreme content. For instance, how should platforms handle live videos of abortions performed outside of the health care system? These decisions will require judgment calls about how best to balance community safety, free expression, and the social and political value of graphic content. Platforms should develop clear policies on these issues and enforce them consistently.
States can play an important role holding platforms responsible for their content policies and moderation decisions. Existing consumer protection laws already provide states the ability to prosecute platforms that break the promises they make to their users. Whatever policy a platform adopts on whether to permit or remove graphic videos of abortion procedures, harmed women, or aborted fetuses, it should be held accountable if it violates that commitment. By prosecuting systemic and egregious violations of terms of service, states can help ensure that users understand and trust the commitments platforms make to them.
Finally, anticipating a radical shock to online discussion, states can work to ensure the resiliency of our communication systems by supporting the institutions and infrastructures that provide important information and resources to citizens. States could endow new multistakeholder commissions that draw from industry, academia, civil society, and government to consider the problems of online content and content moderation in their state. States can also shore up the communication offices and practices of government institutions to ensure that they fill information holes on abortion regulation with reliable information. Finally, states can better support local news outlets, providing people with valuable reporting and analysis on abortion-related issues in their communities.
None of these policy options will solve the online content challenges likely to follow from a reversal of Roe. But as states play a larger role in determining abortion rights, they also have an opportunity to support healthy online content.