Online speech has connected us in unimaginable ways, with new ways to stay in touch with one another, social graphs that can amplify exactly what our friends (or advertisers) want us to see, and faster-than-ever information about breaking stories. But it’s also brought us foreign election meddling, online harassment, news that moves too fast to verify in real time, and other threats to personal and national security.
On Jan. 30, Future Tense brought together politicians, technologists, and researchers to discuss the challenges presented by online speech, a driving influence of modern life. The problems aren’t simple—and neither are the solutions.
One big question of the event was whether the government should step in to regulate private platforms, especially given concern over the influence sites like Twitter and Facebook had on the 2016 election. This is particularly true in the case of paid political ads, which aren’t currently regulated like their print, television, and radio counterparts.
That’s something Sen. Amy Klobuchar, D-Minn., is currently trying to fix. She is a co-sponsor of the Honest Ads Act, an amendment to the 2002 Bipartisan Campaign Reform Act that would require large-scale digital platforms to maintain public records of all election-related content purchased by any group or person who spends more than $500 on a platform.
“I strongly believe that with only 280 days to go [to the midterms] we just can’t sit back and admire this problem, that we have to take action,” said Klobuchar. “Russia and other countries are emboldened when we do nothing in the face of clear evidence.”
Rep. Ted Lieu, D-Calif., said that he too supports disclosure when it comes to political speech. But he emphasized that he wouldn’t want the government to “regulate” tech companies at large on matters of free speech. Instead, he said, he prefers a “light touch,” using existing laws.
“It’s not illegal to let people know your opinion. What was illegal was if you stole emails or stole information from a campaign and then you put that out. … So why would it be any different if they can do that on Twitter or Facebook. I mean is it just a scale right that we’re concerned about?”
While most speakers agreed that, in principle, Google, Facebook, and Twitter aren’t too big to regulate, there was disagreement on how much we should expect, and trust, the companies to address these problems themselves. Recent news that Facebook and Google intend to hire more human workers to vet content was regarded by most speakers as a good thing, for instance. But as Klobuchar noted, when companies self-regulate, their practices aren’t consistent with one another, leading to varying results.
“There are questions of democratic accountability because … [these] speech engines … have changed our norms around what we expect to be able to say,” said Kate Klonick, a Yale Information Society Project fellow (and a Future Tense fellow).
Klonick pointed out that even in the European Union, where there is an attempt to regulate transnational internet companies and design standards that impact everyone, efforts such as “right to be forgotten” laws still cause contention. The porous borders online mean that one country’s regulations can set a precedent other nations might not want to follow. (For more on that, read our Futurography package about global internet governance.)
The internet has fundamentally changed modern ideas of free speech. Whitney Phillips, a professor at Mercer University who has written extensively about online trolling, said that the idea that we need to “protect … the worst kinds of speech” online came from early hacker ethics. But the problem, she explained, is that that kind of mentality often works to silence groups of people who aren’t “the loudest, most harassing.” Without consistent norms and a strong definition of online harassment, the problem has gotten more intense as online communities have grown and diversified.
“People have a hard time following rules when they don’t know what those rules are. And if you’re assuming that most or many people who use the platform are operating in good faith, they would want to follow the rules,” said Phillips.
One problem is that these platforms are often designed by companies that aren’t nearly as diverse as their user bases. Caroline Sinders, who researches harassment as a product analyst at the Wikimedia Foundation, suggested one solution: a product design process that incorporates the viewpoints of people who face abuse on platforms like Facebook. “We have to determine or rather have a more community buy-in of what are those tools look like and who gets to make them right,” Sinders said.
That approach could help address online speech problems beyond harassment. Dan Gillmor, director and co-founder of the recently launched News Co/Lab at Arizona State University, also highlighted the need to involve and educate users to spot unreliable information. (Gillmor strongly encourages people to avoid using the term fake news.) Rather than putting the burden on tech companies to label their content, he called for greater civic education.
“We have to upgrade ourselves. This is not just about upgrading journalism or making companies do better with processing information, but we have to do this for ourselves. And we have to help people find and understand and act on and create useful information and news and to share it with integrity,” said Gillmor.