That flapping noise you might have heard earlier this week was the sound of YouTube winging it.
On Tuesday night, the video platform said it would not act against one of its users, right-wing talk show host Steven Crowder, in response to the complaints of another, Vox journalist Carlos Maza. Maza had tweeted a supercut in which Crowder, who has 3.8 million subscribers, repeatedly insulted Maza using homophobic and racist language. Maza also showed that many of Crowder’s fans had subsequently harassed him on social media. After several days, YouTube finally responded, saying it had conducted a thorough review of the videos in question and determined they had not violated its policy against hate speech.
Then on Wednesday morning, YouTube announced that as part of a stricter policy against hate speech, it would remove thousands of videos promoting Nazism, white supremacy, and other hateful ideologies premised on the claim that one group of people is superior to another. That afternoon, the platform decided to strip Crowder’s YouTube channel of the ability to make money off ads. First, responding to Maza, it said it was doing so because of a homophobic T-shirt Crowder was selling through the shop on his website; then it said the demonetization was because of other violations of YouTube’s policies on his channel. But YouTube never said how, or with what content, Crowder had broken its rules. Instead, the company seemed to just keep making decisions, letting us know each time it changed its mind.
The incident revealed different things to different people. Conservatives from Ben Shapiro to Sen. Ted Cruz saw more evidence of a Silicon Valley giant curbing conservative speech to mollify its more liberal users. Supporters of Maza railed against YouTube’s indecision amid a clear case of hateful language and harassment. Tech critics, explaining how YouTube was once again struggling to interpret its own rules, offered suggestions for better content-moderation policies that it and other platforms might adopt. And it all took place in the shadow of what looks to be an increasingly serious effort by U.S. regulators and lawmakers to crack down on companies like YouTube’s parent, Google, which they believe may have used anti-competitive means to achieve such massive scale.
What almost everyone would probably agree on is this: Social media platforms like YouTube are a mess, and every time they try to clean up their mess, they fail in one way or another. It may well be time for the government to do something, but many people assume that any resulting action would be a blow to free speech: Someone is bound to be silenced by overreaching feds.
But the history of how the U.S. government has regulated mass communication offers another way to understand the issue. Social networks are venues for small-scale communication, but they also serve as 10,000-foot-high podiums where people can broadcast ideas and reach millions in ways never before technologically possible. They’ve consolidated audiences, dominated users’ time, and come to play a central role in the flow of political information. Once we start to think about the technology like that, social media platforms look a lot more like communications infrastructure, like radio and television, than they do like a venue for passing notes or publishing casual missives.
For decades, radio and television followed regulations—hardly heavy-handed ones—meant to ensure they served the information needs of their audiences and did not actively harm political discourse. The public may not own the internet the way it owns the airwaves, but the two aren’t completely dissimilar: The internet is a resource built by government researchers. Thinking about the largest internet platforms as a kind of infrastructure is a useful place to start considering what light-touch regulation of their broadcasting functions might look like. Social media platforms affect the public interest. And so they should serve it.
It’s easy to recognize the problems with the current nonapproach. YouTube’s continued fumbling is scary because it is one of the most important information sources in the world. It’s the second-largest social media network and the second-most-popular search engine on the internet. Social media is one of the key ways Americans access their news—two-thirds of adults in the U.S. use social media to get news. Across the world, people watch more than 1 billion hours of YouTube a day. Like radio or television, social media plays a central role in providing people with the information they need to participate meaningfully in political life.
That the content generally comes from users without centralized editorial oversight is one big difference between social and traditional media. Another is that many of the creators with gigantic audiences on YouTube and Facebook traffic in hate speech and dangerous misinformation. It’d be much, much harder for Alex Jones or Laura Loomer or a Russian agent pretending to be an American racist to get a national radio or television show and reach the same audience. So it makes sense that these platforms are so appealing to bigots, conspiracy theorists, and trolls, who have used their megaphones to spread misinformation and hatred. Before social media, where could they find an audience so large? In response, Congress has regularly held hearings over the past two years on the negative externalities of social media, bringing in everyone from CEOs like Mark Zuckerberg and Sundar Pichai to right-wing pseudo-entertainers like Diamond and Silk.
At these hearings, lawmakers have threatened to regulate the industry, which has inspired the platforms to try to self-regulate, though neither thing is actually happening. “I don’t want to vote to have to regulate Facebook, but by God I will,” said Sen. John Kennedy, a Louisiana Republican, last year. Just this Monday, Speaker of the House Nancy Pelosi tweeted, “The era of self-regulation is over.” But no one is pointing to a specific piece of legislation or even a concrete policy idea (other than Sen. Elizabeth Warren, who wants to break up Facebook). On Wednesday, YouTube said it had enacted 30 new user policies in 2018, many of which focused on removing or limiting the spread of hate speech, and yet the site remains a bigot-infested wasteland. In part, that’s because every time YouTube makes a rule, its most odious users figure out how to walk right up to whatever line they now can’t cross.
Lawmakers don’t seem to have a clear sense of what to do here. That’s understandable. Regulating what a company can and cannot host on its site is uncomfortable for a reason: Freedom of speech is really, really important. People who use these platforms to air their hate speech know this, crying “censorship” every time a company takes some action to limit the spread of bigotry.
One problem with this logic is that private companies can do whatever they want—the First Amendment doesn’t apply to what Facebook allows and does not allow. But if we’re uncomfortable with giving Facebook and YouTube all of that power over how citizens debate, it isn’t crazy to insist that the government can play a carefully defined role. It wouldn’t mean that the feds would decide what is and is not acceptable speech on the platforms. But it could mean rules against broadcasting hateful views or disinformation to large audiences. Freedom of speech isn’t the same thing as the freedom to broadcast that speech.
That’s a point lawmakers in the early days of broadcast seemed to understand quite well. The 1934 Communications Act, which created the Federal Communications Commission, mandates that broadcast license holders operate in the “public interest, convenience and necessity” of the communities they serve. This public interest standard has proved hard to define over the decades. Regulators, industry lobbyists, advocates, and communications scholars have written thousands of pages debating its contours. For decades, before Ronald Reagan became president and took an ax to federal regulations across American industries, the FCC required that any entity granted a broadcast license follow certain light-touch rules to ensure it was operating in the interest of its audience.
These included rules like the Fairness Doctrine, which required broadcasters to dedicate at least some time to covering politically important and controversial issues and to do their best to make sure that representatives of various views were able to communicate their positions. Broadcasters were required to devote a certain amount of time to public affairs coverage and to rein in excessive advertising. There was also a rule in the 1970s that required those with a broadcast license to conduct annual interviews with community leaders, like local clergy, heads of industry, union leaders, and community advocates, and to send out random surveys to listeners to ensure the diverse communities they broadcast to were being served by their programming. The public interest obligation has also been interpreted to require diversity of ownership of broadcast stations in order to maintain a diversity of viewpoints, protecting against a single company controlling all the radio and television stations in a single market.
In the early 20th century, the U.S. opted to prioritize private access to the public airwaves rather than a model of public control, like the BBC’s. Since broadcasters were given a license to use the public airwaves, essentially receiving free monopoly rights to a public resource they could profit from, the logic was to give Americans something back in return. “The idea of the public interest at least implicitly signaled that there is always a kind of market failure—that the market would not entirely take care of our media system, especially in regards to democratic obligations, so there has to be this special category of the public interest,” says Victor Pickard, a professor of media studies at the University of Pennsylvania and the author of America’s Battle for Media Democracy. There were two main arguments for placing public interest obligations on broadcasters. One was that quid pro quo: Broadcasters profit from a public resource, so they owe the public something in return. The other was that the airwaves are scarce. Before the transition to digital broadcasting, there were only so many stations that could operate in one area without causing signal interference. Since the public resource was limited, if you were lucky enough to get a license to use the airwaves, you had to follow certain rules.
These rules were made by politicians who admitted they didn’t understand the technology they were regulating. (Sound familiar?) As Sen. Key Pittman of Nevada put it in 1926, referring to early calls to help police an increasingly crowded radio dial: “I do not think, sir, that in the 14 years I have been here there has ever been a question before the Senate that in the very nature of the thing Senators can know so little about as this subject.” But that didn’t prevent legislators from forming a federal agency—the Federal Radio Commission—to figure something out. Today, arguments abound that lawmakers are too behind the times to regulate these companies, but technological literacy matters far less than understanding why our information systems need protecting to keep democracy running smoothly.
The current public interest requirements for broadcasters are nearly unrecognizable compared with the form they took before Reagan-era deregulation—much of the license-renewal process has been reduced to answering a few questions on an online form. Regulating social media platforms in exactly the way broadcasters were regulated decades ago doesn’t make sense. But the old debates over broadcast could help guide lawmakers grappling with how to rein in the new communications giants of today. They need to acknowledge that the core communications infrastructure Americans rely on to stay informed should have guardrails to ensure it’s operating, at least at some level, in the interest of the public. In today’s terms, that might mean actual rules against broadcasting hate speech that is sure to reach a large audience—say, from a channel with more than a certain large number of subscribers. (Channels with fewer subscribers, so long as a platform isn’t boosting them algorithmically, are less urgent to address.) It could mean an obligation to report regularly on efforts to purge viral misinformation and to remove foreign and domestic actors who create fake accounts intended to meddle in electoral politics. It could mean clear requirements for handling user data responsibly and for not allowing ads that lead to discrimination, as in housing and employment.
Communications infrastructure, particularly when the technology has a broadcast function, is powerful. People are able to use it to misinform and spread hate to audiences in the millions every day. People also rely on social media for the information they use to decide where they’ll send their kids to school, how they’ll handle their health care, and whom they’ll vote for. Politicians now thinking about what to do about the mess that social media has become might find inspiration in the policies that guided broadcast technology for decades—policies rooted in an understanding that corporations seeking to make as much money as possible will always prioritize profits above all else. Protecting the safety of their users will always come second. That’s where laws are supposed to come in. It’s time we got some.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.