For more than a year, we’ve been trying to make sense of how the Russian government helped the Trump campaign during the 2016 election. Monday’s charges and revelations from the Mueller investigation began to clear one path of potential answers. Tuesday and Wednesday’s Capitol Hill hearings featuring representatives of Facebook, Twitter, and Google might better illuminate another.
While the threat of Russian interference in the 2016 election started making headlines back in June of last year, when the Democratic National Committee found that a hacker operating under the name Guccifer 2.0 had infiltrated the DNC’s servers, it wasn’t until this September that Facebook admitted it had been gamed by the Russians, too. Soon after, Twitter and Google both also disclosed that Kremlin-backed groups appeared to have used their social media–sharing and ad-targeting tools in an attempt to sway the election and help secure Donald Trump’s victory. Often, these efforts pushed far-right messages in line with the Trump campaign’s messaging; in other cases they attempted to sow discord and confusion by peddling false stories and pushing the buttons of movements on the left, like Black Lives Matter. We’ve learned how these prongs of the Russian disinformation campaign worked in dribs and drabs. Congress, too, has investigated how the Kremlin may have meddled in our election, and this week it’ll put the largely unregulated platforms that unwittingly helped those efforts in the hot seat.
First up is a hearing on Tuesday, when staff from Facebook, Twitter, and Google will go before the Senate Judiciary Committee’s Crime and Terrorism Subcommittee, led by Sen. Lindsey Graham. There, Facebook is expected to tell Congress that a Kremlin-affiliated group’s ads were served to 29 million people and that after those posts were liked or shared, the reach of that content stretched to approximately 126 million people over the past two years, according to multiple reports on Facebook’s prepared remarks. Earlier, Facebook had estimated that the Kremlin-backed ads reached about 10 million users—a lowball figure, apparently.
Then on Wednesday, general counsel from all three companies will testify at hearings before the House and Senate intelligence committees on how exactly a foreign government was allowed to buy political ads targeting Americans before our election. So far there’s evidence that Russian-linked operatives used Facebook, Twitter, and Google to meddle in the U.S. election in almost every imaginable way, including registering fake accounts and faux organizations masquerading as concerned voters and activist groups, flooding Twitter with militias of bots, organizing bogus grassroots events, and plastering social networks with hateful, divisive memes, fake news, and viral posts in order to manipulate American voters in the run-up to Election Day. Expect questions on Tuesday and Wednesday from unhappy politicians wanting to know what exactly these companies were aware of and why they didn’t do more to stop it.
Google finally offered details about what it plans to share with Congress this week. According to a blog post published late Monday, the company found 18 channels on its YouTube platform with connections to a known Kremlin-backed content operation, which uploaded 1,108 videos totaling 43 hours of content between June 2015 and November 2016, right before the election. Those videos apparently notched 309,000 views. Google also says it found evidence of $4,700 spent by a Russian propaganda group on search and display ads. Google also maintains that on its Google Plus social network (which, yes, is still a thing) it identified “no political posts in English from state-linked actors,” yet last month ThinkProgress reported that it found a Google Plus page for the group “Black Matters US,” which appeared to also be linked to Russian state actors. That Google Plus page was recently suspended, tweeted Casey Michel, the journalist who spotted the account. So Google might be lowballing its estimates here, too.
And Twitter will tell Congress that it has located 2,752 accounts that were run by Russian-government backed operatives and more than 36,000 bots that sent some 1.4 million tweets over the course of the election, according to the Washington Post. Last month, Twitter told Congress that it had found only 201 accounts associated with the Russian-linked accounts Facebook had disclosed. That figure was criticized by Sen. Mark Warner, who charged that Twitter’s disclosures at that time were “inadequate” and “deeply disappointing,” since Twitter was only piggybacking on the analysis of its social media rival.
For obvious reasons, Silicon Valley is not enjoying this spotlight. Platform companies want to be seen as neutral playing fields, not participants in the body politic. Yet for much of the left, these internet companies may now represent the clearest point of blame yet for why Trump won—other than the fact that nearly half the electorate voted for the man. For the chunk of the right that always thinks someone is persecuting its beliefs, these firms have been objects of suspicion for some time now. Recall how much Facebook overreacted to its flimsy “Trending News” scandal, in which it was accused of suppressing news stories of interest to conservative readers. The Russia scandal is a migraine several orders of magnitude more painful.
At stake: the overall freedom of some of the most valuable companies in the world. Pressure from the left and the right could lead to increased scrutiny and regulation from Washington, which is the last thing Silicon Valley wants. Then again: Like a kid who really hates lightning bugs, these companies are extremely good at snuffing out flickers of oversight. Google and Facebook would be remiss if they didn’t leverage their billions in profits to keep Washington out of their backyards—and leverage them they do.
There’s already a bill in the Senate, the Honest Ads Act, that would require social media companies, on pain of fines, to reveal who bought political ads costing more than $500 on their platforms, how much was paid, and which audiences the ads targeted. Broadcasters and publishers are already required to disclose this type of information and post disclaimers on ads to help voters know who exactly is trying to influence them.
Though the Honest Ads Act seems like a rather reasonable extension of existing campaign ad law, Google, Twitter, and Facebook aren’t rooting for it. Instead, the three companies are trying to show that they can regulate themselves just fine. In September, shortly after admitting that Russian operatives spent $100,000 on 3,000 political ads surrounding the 2016 election, Facebook outlined a number of new rules it was implementing that it hoped would prevent that kind of Russian interference from happening in the future. The reforms include a new tool that will show users all the ads that any one particular Facebook page has bought, no matter which users were targeted. And on Friday, Facebook said that it would extend its transparency rules beyond political ads to cover all ads served on the platform.
Last week, Twitter shared its plan to build a new Transparency Center, which would show who bought an ad, how long it’s been running, and whom it was targeted to. Political ads will also include a special marker on the tweet to distinguish them from nonpolitical ads. Google hasn’t announced a new plan for dealing with political ads, but the search giant did demote RT—the Kremlin-funded news network that a January report from the director of national intelligence called out as part of “Russia’s state-run propaganda machine”—from its “preferred lineup” category, which reportedly guaranteed the channels revenue from advertisers.
But those proposed changes came only after members of Congress aired their dissatisfaction, and all three companies dragged their feet in accepting Congress’ invitation to testify. Facebook, Google, and Twitter might say that they’re ready to clean up their own mess, yet they’ve largely shown in recent weeks that they’re sore about being scolded and mostly just want to avoid new regulations.
And of course they do. Facebook made $85 million from advertising and promotion of the Trump campaign alone, according to Theresa Hong, one of the main brains behind the digital arm of his presidential bid, and that doesn’t include posts from super PACs or other shady political groups that were gunning for a Trump victory. Likewise, Facebook and Google both made millions off bigoted ad campaigns leading up to the election, including an offensive satirical travel ad about what Paris might look like if it became the “Islamic State of France.” That ad was made by the organization Secure America Now, an advocacy group dedicated to inspiring fear and bigotry against Muslims and defeating Hillary Clinton. Facebook didn’t only take its ad dollars; the company also worked directly with the group to test a new video format and build a case study for the anti-Muslim ad campaign. Meanwhile, it took Twitter 11 months to shut down a Russian troll account pretending to represent the Tennessee Republican Party after it was flagged on three separate occasions by the state’s actual GOP. These issues were simply not a priority for these companies until pressure from Congress and scrutiny from the media made them one.
Google, Facebook, and Twitter were well aware that their tools were being used by domestic groups trying to stoke divisive fear and hatred in voters before the election. They either helped or looked the other way, and they certainly had the power to better monitor and flag when a non-American group used their sites to promote election-related content, too.
Still, as with most things that happen in Congress, don’t expect a preponderance of evidence to result in reasonable policy reforms. The Federal Election Commission has had plenty of opportunities to regulate online political ads, but there hasn’t been pressure from Congress to get it done. That’s probably in part because many members of Congress rely on data-driven ad targeting from all kinds of dark-money sources in order to boost their prospects of winning elections, too. And Facebook and Google have both lobbied the FEC over the years to exclude their online ads from traditional political ad disclosure rules.
None of this means this week’s hearings are sure to be useless. Congress could extract all kinds of details from Facebook, Twitter, and Google about how much they knew about Russia’s election meddling, which could work to bolster support of the Honest Ads Act. The public hearings could propel the companies to make more dramatic changes internally, too. Brutal congressional hearings in 2016, after all, preceded the resignation of the CEO of Wells Fargo over a fraud scandal; Sen. Elizabeth Warren had even told him at one such hearing that he should resign.
But more than anything, we’re likely to learn at least a little more about how these corporations and the Russian government may have unwittingly conspired to help Trump win the election. And knowing more about the conditions that got us into this mess could help to ensure history doesn’t repeat itself. Let’s just hope our elected representatives don’t throw softballs.