On Tuesday, the House held the first hearing of its select committee investigating the Jan. 6 attack on the U.S. Capitol—an attack that was livestreamed.
As the country scrambled to understand what was unfolding at the Capitol, those storming the building uploaded selfies, live video, and status updates in real time to a handful of popular social media platforms—including the relatively new, but rapidly growing, platform called Parler.
The rally and march that led to crowds fighting Capitol police, scaling the Capitol walls, and ultimately breaking into the halls of Congress were the culmination of a months-long #StopTheSteal movement that unfolded across the internet—from social media platforms to event planning and ticketing websites, crowdfunding campaigns, and independent online message boards. And though it was clear to many journalists, researchers, and community organizers that major platforms like Facebook and Twitter were early social media hubs for #StopTheSteal content and organizing, the sheer volume of evidence uploaded to Parler on Jan. 6 turned a spotlight on the lesser-known platform in the hours and days immediately following the attack. Within 48 hours, the Parler app had been removed from both the Google Play Store and the Apple App Store. It wasn’t long before Amazon Web Services, Parler’s web host, followed suit, forcing the platform’s operational back-end offline and rendering it inaccessible to users.
Large social media platforms have faced mounting pressure regarding their content moderation policies and practices from users, activists, and, increasingly, members of Congress. In 2020, an onslaught of election- and pandemic-related disinformation supercharged calls for more transparency and greater accountability from popular platforms like Facebook, Twitter, and YouTube. But in debates on internet content moderation, policymakers continue to exclude, or at least relegate to the margins, the rest of the online ecosystem—from payment processors to website hosts to mobile app stores—involved in content creation, storage, and dissemination. Parler’s brief exile brings into sharp focus how companies throughout the internet ecosystem not only can, but frequently do, make critical content moderation decisions—often risking collateral and disproportionate impacts while simultaneously failing to meaningfully address the spread of harmful content online.
Under the hood of every website, every online article, every viral video is an entire ecosystem of companies and organizations that make it possible for us to access that piece of content through an internet-connected device. As we detail in a new report, authored for the Tech, Law, & Security Program at the Washington College of Law, these actors all play necessary functional roles in the distribution of content online. Their functions can overlap, with some companies falling into multiple functional categories, and certain functions are frequently bundled together and sold as a single service. Some of these actors are household names, like Amazon and Google. Many others are not. Yet without them, the internet as we know it would not function.
There are companies that provide access—including internet service providers like AT&T and Comcast and virtual private networks—linking devices to the online world. Other actors route internet traffic from users to sought-after content once those connections are made; this includes registries and registrars that operate the internet’s Domain Name System (something akin to a phone book), as well as content delivery networks like Cloudflare and Akamai that allow websites and platforms to operate at scale as their user bases grow. Web hosts and content delivery networks give that content a secure place to sit. Meanwhile, many other companies provide or enable such functions as browsing, financial facilitation (think PayPal or Alipay), and search (like Google or DuckDuckGo).
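The phone-book analogy for the Domain Name System can be made concrete with a toy sketch. The mapping below is purely illustrative (the domain names and addresses are placeholders, and real resolution walks a hierarchy of root, top-level-domain, and authoritative name servers), but it shows the basic bargain: users ask for names, the "phone book" answers with addresses—and an entry that a registry or registrar stops maintaining simply stops resolving.

```python
# Toy illustration of the DNS "phone book": mapping human-readable
# domain names to the numeric IP addresses that routers actually use.
# Entries are placeholders, not real DNS data.
PHONE_BOOK = {
    "example.com": "93.184.216.34",
    "example.org": "93.184.216.34",
}

def resolve(domain: str) -> str:
    """Return the IP address recorded for a domain.

    Raises LookupError when no record exists -- loosely analogous to
    the NXDOMAIN answer a real resolver returns for a name that has
    been removed or was never registered.
    """
    try:
        return PHONE_BOOK[domain]
    except KeyError:
        raise LookupError(f"NXDOMAIN: no record for {domain!r}")

print(resolve("example.com"))  # → 93.184.216.34
```

In this sketch, a registrar refusing to service a domain amounts to deleting its line from the book: the site's server may still be running, but no one asking by name can find it.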
These categories matter because many of these actors are already involved in online content moderation, even if their histories of doing so are only sporadically publicized. On Aug. 3, 2019, a gunman stepped into a Walmart in El Paso, Texas, and opened fire, killing 22 people and injuring two dozen more. The suspect was soon linked to a “lengthy and hateful diatribe” posted on the notorious message board 8chan, prompting one of 8chan’s key service providers, Cloudflare, to essentially boot it from the internet. Just months earlier, in the wake of the Christchurch terrorist attack in New Zealand, internet service providers in Australia and New Zealand had temporarily blocked access to 8chan countrywide to prevent viewing of the shooter’s live-streamed attack video (though larger platforms like Facebook were allowed to remain online despite hosting the initial livestream—and hundreds of thousands of copies).
PayPal froze WikiLeaks-affiliated accounts in 2010; self-styled men’s rights activists have leaned on PayPal to freeze the accounts of sex workers and other adult content creators, who also face persistent censorship on social platforms. These are hardly the only money-related content moderation incidents: In April 2020, registries and registrars worked with the U.S. Department of Justice to disrupt hundreds of COVID-19 scam websites, which tried to steal visitors’ money with false information; U.K. domain registry Nominet did the same of its own volition, screening COVID-19-related websites by default at registration and refusing to service those it deemed illegitimate. In December 2020, Mastercard and Visa cut payment services to Pornhub after a New York Times article alleged the website was hosting volumes of child sexual abuse material.
Internet service providers can block or throttle (slow down) access to particular websites; browsers can filter out certain kinds of content. Domain registrars can stop servicing particular domains, app stores can boot applications off their marketplaces, and hosting services can refuse to support particular websites or content or actors. All of these actors can—and do—exert a range of controls over online content creation, storage, and dissemination. But rarely do they have clear policies and frameworks in place for making these decisions, in large part because their ability to exert this influence is often left out of online content moderation discussions.
All told, Facebook, Twitter, and the large social media platforms have an outsized influence on the distribution and amplification of harmful content online—but they are not the whole internet. Shaping the content moderation conversation around the largest players in the ecosystem ignores the range of other—sometimes smaller, sometimes dominant—actors who also exercise levers of control over content availability online, whether they make the headlines or not. The continued failure to include these actors in conversations about when and how content can and should be regulated online leaves private companies that control large portions of the internet in a position to continue making opaque, ad hoc decisions about what content should, or shouldn’t, remain online. Those decisions typically leave users with limited recourse, and they render efforts to effectively combat harmful content online incomplete by default.
Parler, meanwhile, has already found a new web host.
You can also find it in the Apple App Store.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.