This piece originally appeared on The Conversation.
Uncertainty continues to swirl around scientist He Jiankui’s gene-editing experiment in China. Using CRISPR technology, He modified a gene related to immune function in human embryos and transferred the embryos to their mother’s womb, producing twin girls.
Many questions about the ethical acceptability of the experiment have focused on ethical oversight and informed consent. These are important issues; compliance with established standards of practice is crucial for public trust in science.
But public debate about the experiment should not make the mistake of equating ethical oversight with ethical acceptability. Research that follows the rules is not necessarily good by definition. As He pushed ahead with human gene editing, the extent to which he skirted the rules may not be his primary ethical failing.
A statement signed by 122 Chinese scientists proclaimed He’s work “crazy” and in violation of ethical standards. Is that really the case?
Scientists undertake medical research to generate knowledge that may one day be used to improve human health. This work can help determine new strategies for prevention and early detection of disease, or develop new drugs and new technologies for treatment, for example. No one knows which preventive measures, diagnostic tools, or treatments are most beneficial until they have been rigorously tested.
Ethicists tend to focus most on studies that ask a lot of human subjects because these usually carry the most risks for volunteers. Picture a drug study with participants taking an experimental medication, keeping a daily diary of symptoms and side effects, meeting frequently with a physician, and so on.
There’s a long history of abuse and misuse of human subjects in research, from medical workers withholding syphilis treatment from unsuspecting black men in Tuskegee, Alabama, so they could track the disease’s progress, to the deliberate infection of research participants with syphilis in Guatemala in the 1940s, to more recently the role of conflicted investigators involved in psychiatric research at the University of Minnesota. In recognition of the potential for abuse, all research undertaken in the U.S. in institutions like universities that receive public research funds or by companies seeking Food and Drug Administration approval for a product is overseen by various ethical and regulatory committees.
The ethical acceptability of research is contingent on an institutional review board’s judgment that the procedure has the potential for benefit that counterbalances risk of harm. Institutional review boards are typically internal to research institutions but are meant to be independent of investigators. The board also works to ensure that the process of informed consent is robust, such that participants are appropriately educated about the relevant risks of participation, are free from coercion to participate, and are aware of their ability to decline to participate without penalty.
Funders of research will also conduct scientific peer review of a protocol to ensure the quality of the research design. Poorly designed research is ethically problematic, since it wastes financial, human, and other resources that could be allocated to better-justified research.
Journal editors play an important gatekeeper role as well. Studies conducted without appropriate ethical oversight may not be reviewed for publication in journals that abide by the Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals adopted by the International Committee of Medical Journal Editors.
Concerns at any of these steps along the way can prevent health research from proceeding or from contributing to the scientific and medical literature.
When He presented his work at a session of the Second International Summit on Human Genome Editing in Hong Kong, many people raised questions about the informed consent process. Important as they are, the queries also seemed to be groping for a smoking gun—some clear violation of existing standards—in order to declare what people already felt: that the research was unethical.
Having those standards and discovering a violation of them make judgments of ethical responsibility feel straightforward and objective. A rule was broken; the research was unethical. Case closed. There are certainly questions about the adequacy of the processes He’s research went through. Were collaborators kept in the dark about its nature and aims? Were the experimental protocol and the informed consent process subjected to rigorous review by an independent oversight body? Was the consent process itself robust and not compromised by the interests of the researchers?
But by focusing heavily on these still-open questions, the scientific community risks implying that mere compliance with routines of oversight would have made it ethical. That approach fails to ask what is being overseen, what is being overlooked, and whether that matters to how we judge the ethical acceptability of an experiment.
It’s important to ask not only whether there was ethical oversight, but what it consisted of. Just because there has been a process does not mean that it is thorough or sufficient.
This is particularly important in the case of germ line editing, because it’s so unlike most conventional therapies. As the U.K. Nuffield Council has pointed out, it is incorrect to call it a therapy. If one were undertaking gene therapy in a baby, or even a fetus, to address a life-threatening genetic disease, it would be appropriate to accept a certain amount of risk, because the alternative is much worse: living with a life-threatening disease.
But in the case of embryo editing, there is not yet a child who is sick and needs to be healed.
Because the genome-editing molecules are delivered into the egg at the same time as the sperm, one brings the “patient” into being in the same moment as one undertakes the “therapy.” So, when the experiment is being contemplated, there is no child to heal.
Thus the parents’ desires and interests are the focus. They are the patients/research subjects that the ethical oversight process is primarily built to address. This is a problem: There is something missing in a process that fails to prioritize the interests of the resulting child(ren). Yet, since bringing them into being would involve risks that are significantly higher than normal reproduction, taking their interests into account may mean that the experiment simply should not be done.
In the case of the Chinese experiment, the situation is still more complex because the edit was made not to address a genetic disease that would otherwise affect the life of the resulting children but to protect them against an entirely hypothetical risk, namely exposure to HIV.
These are highly unusual scenarios, and existing ethical oversight, even when done extremely well, is poorly equipped to deal with them. Even if He’s experiment had satisfied all the questions of the reviewing oversight body, that may have been insufficient simply because that oversight body may not be asking (or, indeed, allowed to ask) the right questions.
One risk of locating ethics primarily in research oversight is that in cases like this, the focus tends to be on whether the research was ethically compliant—that is, whether it followed the rules—not on whether it was ethically responsible. In a profoundly novel case like this, it’s worth questioning not only whether the rules were followed, but what they are, and are not, designed to protect against.
He’s experiments push into radically new territory.
His work should cause people to ask hard questions about this technology, and its implications for human identity and for the integrity of foundational social relationships: parent to child, medicine to patient, state to citizen, and society to its members. Under what circumstances, if any, might it be appropriate to tinker with the genomes of our children-to-be?
It should also cause us to ask hard questions about our “technologies” of research ethics—the machineries of evaluation that experiments must pass through. Like any test, they are necessarily incomplete. Yet functionally they are the standard, the primary repository of ethical judgment. And there is no already-settled higher standard against which we can evaluate these processes.
The difficult task of setting standards for the standards belongs to wider society. Processes of ethical oversight for genome-editing research should ideally reflect society’s shared values and norms, not merely as they pertain to informed consent, but as they pertain to our sensibilities about the right ways to care for—and to bring into being—our children.
The crucial question is not what rules were broken, but what—and whose—judgments about what is right and appropriate should rule the human future. Deeming He “crazy” and a “rogue” does not answer the question of what went wrong. To answer that, we must all take a hard look at the potential limitations of current routines of ethical oversight. Are they asking the right questions—questions that those whose lives will be affected by these powerful new technologies would want researchers to ask? That is a question whose answer cannot come purely from within the hallowed halls of science but must be calibrated to the whole human community’s shared visions of the good.