Let’s start by making a few things clear: The novel coronavirus is not a bioweapon manufactured by the U.S. government. Drinking bleach will not cure COVID-19. Pharmaceutical companies didn’t manufacture the pandemic in order to cash in on eventual vaccine sales, nor did Bill Gates. The U.S. has not announced a national quarantine.
In recent weeks, false information about COVID-19—often spread deliberately by state actors intent on causing harm, and amplified by overwhelmed and fearful users—has spread about as virally as the virus itself.
In its wake, that false information has left confusion and a big question: What can social media platforms, individuals and governments do about it?
That question was central to “Confronting Viral Disinformation,” Future Tense’s latest web event in our yearlong Free Speech Project series, which is examining the ways technology is influencing how we think about speech.
Information that is provably false falls into two buckets: misinformation and disinformation. The difference, said Nathaniel Gleicher, head of cybersecurity policy for Facebook, lies in intent. With misinformation, “you don’t know the intent of the actor behind it,” he said, whereas with disinformation, the actor “intended to spread it to deceive.”
To address false information, Gleicher explained, Facebook uses what’s known as the “ABC” framework. The framework (developed by cyber conflict and digital rights expert Camille François) considers actors, behaviors, and content as the “key vectors characteristic of viral deception”—each requiring different approaches and responses.
In a nutshell, the challenge is that while the Russian Internet Research Agency, a scammer trying to make money, and your hapless great uncle Bill may all be spreading deceptive information, they all represent different problems that require different reactions, Gleicher said.
In the context of COVID-19, Gleicher explained, Facebook is grappling with two broad categories of misinformation and disinformation. The first, which Facebook is most focused on, is content that is “provably false, … has been flagged by a global health expert like the WHO and could lead to imminent harm”—for example, the idea that drinking bleach cures COVID-19. That kind of content, once identified, is immediately taken down.
The second category of misinformation and disinformation is more complex, Gleicher said, “because part of it is just people trying to figure out what is happening.” The content in this category is speculative and fuzzy, dealing with questions like the implications and source of the virus and its spread, and what governments are or aren’t doing in its wake. A key part of tackling this second category is ensuring that accurate, authoritative voices are amplified and heard, providing a trusted counterweight to the myths and conspiracy theories. Think the World Health Organization, public health professionals, and our friend and savior Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases.
But such trusted experts also can be targets for bad actors online.
“My worry is that those very important public health officials who have our attention—and should have our attention—will be beset by cyber mobs trying to chase them offline, to discredit them and to silence them,” said Danielle Citron, a professor at Boston University School of Law, a MacArthur fellow, and the author of Hate Crimes in Cyberspace.
And there is an ongoing risk that deceptive information online will wreak havoc offline.
“I worry about situations where suddenly everyone starts saying, ‘Don’t go to Hospital X, but … go to Hospital Y because they have this kind of testing or this kind of access to certain kinds of facilities,’ ” when such claims are actually not true, said Jennifer Daskal, the faculty director of the Technology, Law, & Security Program at American University Washington College of Law, who moderated the online discussion.
Gleicher said that dealing with that kind of situation is deeply challenging, particularly because information is shifting so quickly, and “something might actually be true one moment and untrue the next.” It may, after all, be true that Hospital Y has key supplies. But once large numbers of people show up, that is no longer the case. And monitoring accuracy in real time poses obvious challenges.
But Gleicher said that Facebook is taking steps here. For example, it has banned ads for medical face masks, hand sanitizer, disinfecting wipes, and COVID-19 test kits, and has provided free advertising to global health organizations. The platform has also placed a “Coronavirus (COVID-19) Information Center” at the top of News Feed. On WhatsApp, where limiting the spread of false information is complicated by encryption, the company has launched a health chat feature in partnership with the World Health Organization. The company relies on a combination of outside fact-checkers and researchers, as well as its own tools, to proactively identify and limit the spread of deceptive information. Over the past couple of years, it has also worked to reduce the virality of deceptive messages on WhatsApp, for example by limiting the number of times a message can be forwarded and by labeling messages that have been forwarded.
You might look at Facebook’s fight against false information on its platforms in the context of an “attacker-defender” model, Gleicher said.
“An old insight from military strategy is that defenders tend to win when they can control the terrain, and attackers tend to win when defenders don’t,” he said. “The communications mediums we are using, in a very fundamental way, are the terrain of this conversation.”
Structural changes to platforms that provide context (for example, how many times a message was forwarded, or which country a page is being managed from) can shift the terrain to our advantage.
But private companies can’t be the only ones shifting the terrain to defeat misinformation and disinformation. The panelists agreed that governments also have a part to play in mandating more transparency and restricting things that amount to forged communications.
“There’s all sorts of ways we can use digital technologies to create forgeries, to show people doing and saying things that they never did and said in ways that cause … cognizable harms, economic, reputational, emotional harm,” said Citron, who is working with lawmakers to develop legal frameworks to address such digital forgeries. That’s a complex process, Citron explained, because in drafting those laws, you have to “be really narrow and careful.”
Regulation, agreed Daskal, needs to be “incredibly specific and incredibly clear, because when we start getting into the realm of things like misinformation or hate speech … there’s a real risk of over-inclusiveness … and that has some significant and pretty critical chilling effects.”
And, as the panelists highlighted, it’s not just governments and social media platforms that have an obligation to make sure your great uncle knows that garlic, hot baths, and regularly rinsing your nose with saline won’t prevent the coronavirus—it’s also on all of us.
In an “atmosphere of deep distrust of institutions and also truth decay,” said Citron, we all have an obligation to “have a dialogue with ourselves” about the veracity of the information we share.
And in our own efforts to spread accurate, helpful information to our friends and family members, Gleicher said, “just starting with some empathy” can go a long way.
Watch the full discussion and read more from the Free Speech Project.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.