Future Tense

Facebook and Google Are Too Big to Stop Election Meddling on Their Own

Senate Judiciary Committee member Sen. Al Franken covers his face in frustration as he questions witnesses from Google, Facebook, and Twitter.

Chip Somodevilla/Getty Images

Three of the most recognized companies in the world—Google, Facebook, and Twitter—served as target practice Tuesday as their executives submitted to two hours of questioning from a Senate subcommittee on terrorism about how exactly Kremlin-backed operatives used their platforms to hurt Hillary Clinton’s prospects of winning the presidency, spread disinformation, and stoke social unrest among American voters before and after the 2016 election.

But that was only the beginning. On Wednesday, the general counsels of all three companies have two more hearings to sit through, this time with the House and Senate intelligence committees.

At Tuesday’s hearing, senators came armed with display boards pasted with printouts of Russian-backed content that appeared engineered to rile the far edges of America’s deeply polarized electorate. And while many of the senators who submitted questions aren’t as fluent in social media as a YouTube star or a Twitter-addled journalist, the power of social media to influence, deceive, and manipulate voters wasn’t lost on anyone. Nor was the fact that these massive internet platforms have, in a sense, grown out of control.

“I’m trying to get us down from ‘la-la land,’ ” Republican Louisiana Sen. John Kennedy said. “The truth of the matter is you have 5 million advertisers. They change every month, every minute, probably every second—you don’t have the ability to know who every one of those advertisers is, do you?”

The answer from Facebook’s general counsel was no. In other words, Facebook admitted that it is unable to know whom it’s doing business with, which is exactly how Russian-backed actors posing as faux activist groups and nonprofits were able to buy ads and create inauthentic pages on Facebook, Twitter, and Google. Russian agents were even able to organize more than 60 real-life events across the United States before and after the election, at least 22 of which drew American attendees, according to a Monday report in the Wall Street Journal.

The theme continued. Democratic Connecticut Sen. Richard Blumenthal displayed a tweet containing a doctored image of Aziz Ansari holding up a fake sign urging people to vote from home, a form of voter suppression.

“Do you know how many people voted in this way, thought they voted but in fact were fooled?” asked Blumenthal. Twitter’s representative at the hearing admitted that, of course, there’s no way for the company to know how many people were kept from voting as a result of seeing the false information. Instead, he noted that even more tweets sprouted up contesting it. But that answer wasn’t satisfying. “I have 20, 30, 40 of them, so there may have been people discounting them. But at the same time they kept reappearing,” Blumenthal responded. Twitter may be trying to act fast to take down misleading content, but there seem to be simply too many tweets firing off, from real and automated accounts alike, for the company to keep pace with the deluge of misinformation it hosts.

“How does Facebook, which prides itself on being able to process billions of data points and instantly transform them into personal connections for its users, somehow not make the connection that electoral ads, paid for in rubles, were coming from Russia?” Democratic Minnesota Sen. Al Franken pressed. Facebook tried to deflect, pointing to the fact that all kinds of currencies are used to buy ads on its platform, but Franken wasn’t buying it. The fact that political ads targeted at Americans were bought with rubles should have been a huge red flag, Franken contended.

These companies might be too big to self-regulate. Look at how well Facebook is doing at sorting out its fake news problem, which was identified as a serious issue even before the election. With 2 billion monthly users, Facebook says it sends hundreds of potentially fake news stories each day to the fact-checking organizations it partners with. But according to a report Monday from Bloomberg, those outside fact checkers, most of them tiny organizations compared with Facebook, are only able to sniff out a small handful of stories a day. One of the organizations has a goal of sussing out five fake news stories on Facebook a day. Another said it aims to debunk 10 a week. With hundreds of potentially fake stories being flagged daily, the current system clearly doesn’t scale.

This isn’t just an issue of Russia and the election. After a deadly shooting in Las Vegas in October, people who went to Google to learn about a man falsely identified as the shooter were served threads from 4chan, a notoriously shady anonymous message board chock-full of racist and sexist memes. Posters on 4chan were trying to identify the perpetrator and, surprise, they got it wrong. But that didn’t stop Google from pulling content from the untrustworthy site into the Top Stories bar at the top of its search results, a failure Google blamed on its algorithm.

The Google example, and many of the other concerns senators aired Tuesday, raises the question: Would these companies have the same issues if they just hired enough people to sort these problems out? It’s shocking that Google doesn’t seem to have a team of humans on call when an emergency breaks to quickly vet what does and does not surface as a top result on its search and news pages. Facebook, in an effort to eschew responsibility for policing the content posted to its site, leans on much smaller outside fact-checking organizations. Apparently, Facebook can’t even tell who is buying ads on its site. And Twitter can’t keep up with all the harmful disinformation posted by bots and real people alike.

The proposed Honest Ads Act would, if passed, put restrictions on political advertising posted to these online platforms. But even that wouldn’t solve one of the underlying problems that opened the floodgates for Russian manipulation of the election in the first place: These internet companies have become too big to control themselves. It’s not clear that any amount of regulation, self-imposed or otherwise, will be able to rein them in, which is why so many lawmakers are wondering whether it’s time to break them up.

Read more in Slate about Russia’s 2016 election meddling.
