Regulation of social media is coming. But something is strikingly missing from the American and European debates over its future: the world beyond.
Facebook, YouTube, and Twitter are global platforms. Yet Americans discuss platform regulation as if it were only about American interests; the same is true in Europe, the leading force for online speech regulation in the democratic world today. I get that. That’s how legislators think the world over: How can I protect and promote my constituents’ interests?
But consider also what Colin Powell said about Iraq (and Pottery Barn): You break it, you own it. Here it should be: You built it, you own it.
American companies have come to dominate the globe, replacing public forums with corporate news feeds. They hunger for content—the more speech, the more engagement, the more advertising dollars—and then struggle to moderate unfathomable amounts of it.
They have provided platforms for public debate in places where the state has captured, owns, or represses print and broadcast media. But they are also platforms for advocacy of ethnic cleansing, livestreaming of violence, and encouragement of attacks on refugees. Both the good and the bad of online media worldwide need to be considered in any regulatory scheme.
American companies have a responsibility to protect the people they have enticed to become their avid users. And in turn, U.S. and European regulation must get this right for those companies and their global users. Breaking up the dominant companies may be part of the answer, as Facebook co-founder Chris Hughes and many others argue. But even they acknowledge that competition policy alone will not solve the problems of online speech.
So what should that regulation look like? Companies and governments, thinking globally, must be guided by three basic principles:
First, public rules and public institutions, especially the courts in democratic societies, must ultimately play a central role in deciding what is and is not lawful. But that’s not how it’s working out. Governments flounder, politicize, and pontificate, often using the platforms to censor content, demanding takedowns that their own laws would bar them from carrying out directly. In Europe, public authorities demand that companies “eradicate” often vaguely defined online harms. They embrace a rhetoric of digital danger that feeds neatly into talking points for illiberal governments cracking down on online speech.
The experience of Germany is instructive. In 2015, as Germany began welcoming 1 million refugees, social media lit up with hatred and harassment. “You are not welcome here,” the far right bullied, its posts amplified and spread by both humans and bots. Some converted online threats into violence on the streets.
Heiko Maas, then Germany’s justice minister, was outraged, as much by the content as by social media’s failure to deal with it. After trying to get the companies to comply voluntarily with German law, and then feeling betrayed by their incrementalism, Maas conceived the Network Enforcement Act, or NetzDG. Adopted by the German Bundestag in 2017, the law requires the largest companies to enforce provisions of German law that prohibit incitement to violence, Nazi symbols, and criminal insult, and to take down content quickly when it is “manifestly illegal.”
NetzDG is a democratic state’s effort to reaffirm public norms over corporate dominance. But under NetzDG, it is not courts or independent agencies that decide what is lawful; American companies do. The law effectively blesses private companies as judge, jury, and enforcer.
Maas’ model is spreading across Europe and around the world, reinforcing corporate control of online speech rules. A new proposal from France may put independent agencies and courts in a critical oversight role. Such creative thinking is essential to ensure that our future public forums are not regulated by, as one European official put it to me, the “profit-making beasts” of social media.
Second, companies and governments alike are opaque and bureaucratic, creating a digital enforcement process that Kafka would have recognized. A strong regulatory framework would require companies to open their rule-making processes and their specific decisions to public scrutiny, with appropriate privacy protections. Governments should also require disclosures that help the public understand the artificial intelligence systems guiding companies’ content moderation.
It’s not only about the companies. Governments must be transparent about their demands. Whether in Kashmir, Turkey, Egypt, or other repressive environments, governments too often demand that companies take down content or ban accounts without public disclosure or reasoning. Companies, moreover, should not be expected to censor content in the absence of court orders subject to due process.
Third, we need global standards for global platforms, not discretionary terms of service or some imagined version of the First Amendment. Human rights law provides the right set of guarantees for free expression, privacy, nondiscrimination, and due process.
Human rights standards protect all sorts of speech, but they do not require free-fire zones of disinformation, hatred, and harassment. They envision narrowly drawn restrictions when it is necessary to protect legitimate interests, such as the rights of others (to safety, security, privacy), or to safeguard public order. And they are binding on governments and intelligible to the public worldwide.
They would justify taking action against anti-vaccination sites that harm public health, white supremacists who incite harm against others, and terrorist groups like ISIS that use platforms to extend their violence. But they would also give platforms strong grounds to push back against governments seeking to limit online speech.
None of this will work without public engagement and accountability. Criminal law, which governments like Singapore’s have adopted, is the wrong way to attack the problem of the platforms.
The companies must invest not just in thousands of content moderators around the world but in engaging with communities worldwide, localizing their understanding of the role they play and giving communities ownership of online space. One idea, from the global free speech organization Article 19, calls for nongovernmental “social media councils” to ensure public accountability for online speech rule-making.
These problems are global. They cannot be solved by Washington, European capitals, or Silicon Valley acting alone. The future of free speech depends on getting this right.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.