Future Tense

The Latest Tech Hearing Is About Helping Trump on Election Day

Senate Republicans don’t want answers from the CEOs of Facebook, Google, and Twitter. They want to cow them.

The last time Mark Zuckerberg “appeared” before Congress. GRAEME JENNINGS/Getty Images

On Wednesday morning, less than a week before Election Day, the CEOs of Facebook, Google, and Twitter will appear at a U.S. Senate hearing to discuss their companies’ content-moderation practices, but their true role will be as unwilling participants in speech theater. The Commerce, Science, and Transportation Committee event, which is dedicated to exposing “online platforms censoring conservative speech,” is theater because no such censorship exists. Worse, it will discourage tech companies from combating disinformation at a mission-critical time, when our democracy is on the line. It is primed to contribute to the chaos and confusion that President Trump has been stoking to a boiling point.


First, the evidence is unequivocal: The real threat to American democracy is not “censorship” of conservative perspectives on social networks, but coordinated disinformation campaigns, both domestic and foreign, that sow division, confusion, and distrust.


Consider the long-standing Russian efforts to suppress the Black vote. In 2016, fake accounts built a significant following among Black users by focusing on racial inequality. On Election Day, the operators bought ads targeted at Black audiences encouraging them not to vote.
The pages weren’t actually run by Black activists, but by the Russian Internet Research Agency, which was trying to help elect Donald Trump. Although Black Americans make up only 13 percent of the U.S. population, ads targeting them accounted for 38 percent of the Russians’ U.S. Facebook ad buys and 50 percent of the user clicks. While we cannot know the ads’ actual impact, the 2016 election marked the most significant decline in Black voter turnout in modern history.


These efforts are continuing. In March, Facebook and Twitter removed a network of Russian-backed accounts, originating in Ghana and Nigeria, that targeted Black communities in the U.S. Just last week, intelligence officials reported that Russia and Iran stole U.S. voter registration data to interfere in the election, and that Iran sent personalized emails threatening recipients: “Vote for Trump or else!”

The Senate hearing is the latest in a series of threats by Trump and his allies to expose social media companies to legal liability for removing or flagging disinformation.

On May 26, Trump wrote a tweet claiming that mail-in ballots are fraudulent. Twitter appended a link to the president’s tweet, which led to reports that such fraud is rare. Two days later—saying his “speech” had been “censored”—Trump issued a retaliatory executive order threatening to limit the power of social media companies to remove disinformation.


In addition to absurdly equating counterspeech with censorship, the president and his allies are engaging in a misguided assault on a 1996 federal law that aims to incentivize responsible content moderation. Section 230 of the Communications Decency Act gives social media
companies the power to filter, block, and remove information that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” without risk of legal liability. Congress knew that federal agencies didn’t have the resources to tackle all online dreck, so it secured a legal shield for online platforms to do the work. The Trump administration proposal would eliminate the power of platforms to remove content that is “otherwise objectionable”—like disinformation that discourages voter participation through deception.


If the Trump proposal were law, tech companies could risk liability for removing ads targeted at Black users that spread lies. Twitter likely would do nothing in the face of destructive falsehoods like “you cannot vote if someone in your household has committed a crime.”


The Senate hearing and the Trump proposal are obvious attempts to suppress private speech. The First Amendment stands as a check against government censorship. It doesn’t restrict private entities, which themselves have free speech rights. As nonstate actors, tech companies have the freedom and, crucially, the power to respond to disinformation that results in voter suppression.

Governmental threats that stop tech companies from combating disinformation are far more dangerous to our constitutional values than social-media companies engaging in content moderation. The Senate hearing six days before Election Day is an obvious play to chill
social-media companies’ efforts to remove election disinformation. The goal is to ensure that tech companies won’t remove or respond to disinformation that amplifies division, suppresses votes, or sows confusion as we wait for election results. Punishing companies by threatening to subject them to legal liability for removing disinformation is an unacceptable attack on our democracy.


Companies are taking steps to avoid a repeat of 2016: Facebook and Google are refusing to accept political ads after Election Day, and Twitter has banned political ads altogether. Even so, the effort to chill platforms’ removal of disinformation is real.

To be sure, Facebook, Google, and Twitter need to do a better job of protecting democracy. They need to consistently enforce their policies (including against politicians) and to provide more transparency about the enforcement and effectiveness of their policies and about coordinated disinformation campaigns. But those challenges should be addressed in a serious and comprehensive manner in 2021—not in a politically charged hearing less than a week before Election Day.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.