Genome editing has once again hit the headlines. This time it’s the report of the first CRISPR babies to be born: twins Lulu and Nana, whose genes have been edited in an attempt to confer immunity to HIV. From what we know, the process was ethically suspect: There wasn’t a clear medical need—the twins didn’t have HIV, and there are much safer ways to prevent people from getting it—the consent form provided to the parents was sketchy and opaque, and the research was done in secret. It’s no wonder the story eclipsed another recent development in the world of genome editing, which unfolded at an important meeting of the U.N. Convention on Biological Diversity.
The meeting focused on gene drives—a form of genome editing that can quickly spread genes throughout a population, ensuring that edited organisms’ offspring inherit the desired traits. Gene drive technology has potential environmental and conservation applications, such as eliminating invasive species that threaten ecosystems. Gene drives may even lead to eradicating diseases like malaria or dengue. Despite this promise, there’s concern that scientists will rush ahead and use gene drives before they’re ready for primetime. Critics worry that gene drives may become like the Sorcerer’s Apprentice: replicating beyond control in ways that are unintended, unforeseen, and could result in serious harm to the environment.
These worries came to a head at the convention’s meeting this past November, when countries discussed a proposed moratorium on field tests of gene drives. The ban was ultimately rejected, however, because it would delay beneficial research into gene drive applications that could help alleviate some of today’s most pressing issues in environmental conservation and human health.
But there’s another risk to gene drive research that has flown under the radar. This threat combines legitimate concerns about the safety of gene drives with skepticism about science and information warfare. Russia, or another U.S. adversary, could use the megaphone of social media to stoke worries about genome editing in the U.S. in a campaign timed with the next high-level meeting on gene drives. In fact, Russia has recently engaged in a disinformation campaign claiming—falsely—that the U.S. is developing biological weapons in neighboring countries, and it has also used state-funded news outlets to cast doubt in the U.S. about the safety of GMOs. These campaigns are concerning—they can affect national security, international relationships, and trade—yet they have received far less attention than discussions of misinformation campaigns designed to achieve political objectives. As a report prepared for the U.S. Senate shows, Russia used every major social media platform, including Snapchat, Pinterest, and Tumblr, to target specific demographic groups in an effort to influence the 2016 presidential election. Similar information warfare tactics could be used to exploit Americans’ unfamiliarity with, and opposition to, particular forms of genome editing.
In a recent report on security and genome editing, we outline how a weaponized narrative on gene drive technology could unfold as part of a sophisticated information warfare campaign. It could start with an accidental release of mosquitoes that have been modified to contain gene drives. Suppose the lab is in New England and uses tropical mosquitoes, so that even if the insects escape, the harsh climate ensures they can’t survive and spread the gene drives. Of course, even if the risks to the environment and human health and safety were low, there would be real concerns about such an accident: Were appropriate safety measures in place? Did the researchers get proper training? What are the environmental impacts? How can future incidents be prevented?
Legitimate news outlets could accurately report the story of the release, but it’s not hard to imagine Russia, or another country, pouncing and driving a false narrative claiming that the release’s ecological effects will be disastrous. Perhaps some tweets and Facebook accounts further claim that there is a risk of the mosquitoes spreading throughout the country and contaminating food crops with genetically engineered organisms. Russian state-funded news outlets could spin the story to highlight the “risks” of GMOs to human health and the environment, the role of greedy corporations and unaccountable philanthropies in funding GMO research, and the lack of effective government oversight in genome editing. Because the mosquitoes have been genetically modified, these same news outlets link these “risks” to the mosquitoes’ release. It’s perfectly plausible that this narrative could move from fringe social media accounts to more mainstream science doubters, who might then amplify the message.
Such a campaign would be designed to fuel false stories, undermine the public’s trust in science, and reduce the U.S.’s national economic competitiveness by dealing a blow to life sciences research. It could reduce public support, and possibly funding, for basic research in gene drives. At the international level, it is possible that Russia could erect trade barriers under the World Trade Organization’s sanitary and phytosanitary measures, which “protect human or animal life or health within the territory of the Member from risks arising from additives, contaminants, toxins or disease-causing organisms.” Such barriers could be erected to gain political or economic advantage even if the claimed risk is unfounded. Singly or in combination, these outcomes could slow scientific developments that could improve human health and well-being.
Major social media outlets have taken steps to address the issue of false information distributed on their platforms. For example, Facebook purports to be deploying a mix of “technological solutions, human reviewers, and educational programs that promote news literacy.” Twitter is taking action against bot accounts by targeting applications that automate activity on the platform and decreasing the visibility of “spammy” posts. Media platforms and communications research centers should examine strategies to identify and manage misinformation specific to security and emerging technologies in the life sciences. Social media companies that use human fact-checkers should also ensure that these individuals have the skills and expertise necessary to review material in the life sciences, particularly CRISPR genome editing.
The International Fact-Checking Network, which certifies third-party fact-checkers used by social media companies, could serve as an avenue for facilitating such expertise. While such measures could help curb false information on social media platforms, this isn’t an easy fix. Patrolling social media is difficult, and companies are probably unwilling to commit the resources necessary for such expertise. For small platforms trying to break into the social media game, the necessary resources may simply be out of reach. It also remains an open question how much power democratic societies want to give private companies to police their content. The fundamental democratic ideal of the marketplace of ideas may come into conflict with what seem like common-sense security measures.
As a result, much of the responsibility for combating misinformation on these platforms rests with the end users. Misinformation campaigns surrounding scientific innovations like CRISPR and gene drives benefit from readers’ unfamiliarity with cutting-edge research. We can’t expect everyone to become biosecurity experts overnight or unplug from the internet, but cultivating media literacy among social media users will be important in ensuring future misinformation campaigns fail to gain momentum. The drum of media literacy has been pounded for years, so there’s good reason to be skeptical that things will change. But tweets, posts, and stories that seem to come from the fringe should be read with caution and cross-checked against more reputable news sources.
Many journalists report breaking scientific news in ways that are rigorous and accessible. Readers would do well to use them as go-to sources for science and tech news. That isn’t to say that social media platforms can’t play an important role in communicating scientific developments. They can. But a healthy dose of user skepticism may help mitigate the risk of weaponized bionarratives damaging important scientific advances.