When Kanye West made antisemitic comments on Instagram and Twitter earlier this month, Meta and Twitter responded by locking his accounts, reasoning that he had violated their community guidelines.
Crucially, the decision to freeze his account—temporarily halting his expression in those spaces—was made independently by the companies. No government actor was involved. The First Amendment generally precludes the U.S. government from limiting private speakers’ and companies’ ideas or controlling how social media firms govern their spaces.
But other governments may soon fill that void, and regulate how American tech giants referee speech on their platforms. Earlier this month, the European Union approved legislation aimed at regulating social media platforms: the Digital Services Act. The law will take effect in 2024, in time for the next U.S. presidential elections, and promises big shifts in how online speech is refereed not just in Europe, but also here at home. Among other requirements, the law places substantial content moderation obligations on large social media firms—many based in the U.S.—including limiting false information, hate speech, and extremism.
It’s not clear how social media firms will adapt to the law, but the penalties for failing to comply are massive: fines of up to six percent of annual revenue—roughly $11 billion for Google and $7 billion for Meta. Essentially, the EU has created a significant new legal incentive for firms to regulate expression on their platforms.
The law, while written to protect EU residents, will almost certainly lead social media firms to change their moderation policies worldwide. Thus, with the DSA, the EU will effectively be doing what the First Amendment ostensibly prohibits our own government from doing: regulating the editorial judgments made by social media platforms on which Americans communicate with each other.
This isn’t the first time an EU law has changed Americans’ rights online. When was the last time you were asked about cookies when you visited a website or given the option to limit a tech firm’s access to your private data? Most of those changes stemmed from the EU’s General Data Protection Regulation, which went into effect in 2018.
Large social media firms will increasingly contend that the DSA, rather than their own in-house policies, requires them to remove certain content. That means an American politician’s conspiracy-filled Facebook post will create legal liability for Meta. The company might then take it down to avoid huge fines in Europe. Similarly, a YouTube video posted by Christian nationalists or COVID misinformation shared on TikTok could be taken down because of DSA concerns.
If American lawmakers had rolled up their sleeves and done the difficult work of identifying a narrow path between protecting online spaces and safeguarding First Amendment rights, we wouldn’t be left to speculate about the impact a law written by and for those on another continent will have on our own marketplace of ideas.
Under the DSA, the spectrum of ideas that flow in online spaces, for better or worse, will narrow. The threat of massive fines creates a substantial incentive for firms to remove, out of caution, any content and speakers that could lead to EU penalties. Experience shows these types of laws lead to overcorrection, encouraging firms to take down content while creating almost no incentive to foster an expansive space for the exchange of ideas.
The scenario becomes more complex if the U.S. Supreme Court upholds Texas and Florida laws that require social media firms to leave content and speakers on their platforms, even if doing so violates their community guidelines. Under that scenario, some online platforms would presumably have to choose between the regulations of those two U.S. states and the European Union, accelerating the global internet’s balkanization. Recent years have already seen the development of a separate Chinese internet and a separate Russian internet; the danger of the EU’s DSA, especially if it triggers an American backlash, is that it could drive a similar wedge between the free North American and European Union internets.
The justices have not yet agreed to hear the content moderation cases emanating from Florida and Texas, but they are expected to. Unlike the DSA, the two state laws cannot sidestep the U.S. Constitution, and under nearly a century of precedent, they are unconstitutional. After all, the First Amendment generally precludes the U.S. government from limiting private speakers’ and companies’ ideas or controlling how social media firms govern their spaces.
Back across the Atlantic, tech firms can challenge the DSA in European courts, before or after it takes effect, something EU officials have said they expect. But while the law will require big firms to rebuild their moderation systems in a short period of time, it could also help the companies by outsourcing their most controversial content moderation decisions to regulators.
Firms such as Meta, Google, and Twitter have been widely criticized both by those who feel they don’t do enough to remove dangerous, false, and extremist content and by those who feel they go too far in doing so. Companies naturally resist regulation, but by imposing standards across all major platforms, the EU could be shielding these private companies from the fury of those who would oppose their moderation policies.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.