Scientists from computer guru Bill Joy to Astronomer Royal Martin Rees have wondered whether humanity is morally equipped to cope with the challenges of the 21st century. The list of potential ways in which humans can cause severe harm—and maybe their own self-destruction—on a planetary scale is steadily increasing. Now we don’t just have to worry about nuclear war and environmental collapse; we also have to fear the misuse of genetic engineering, nanotechnology, and robotics. Many of these new technologies are susceptible to abuses by small groups or just extreme individuals. Rees, for one, suggested in his 2003 book Our Final Hour that there is a 50:50 chance that this century will be humanity’s last.
With the stakes so high, the science of morality has recently been touted as a way to help prevent morally deficient humans from triggering their own demise. Oxford University philosophers Julian Savulescu and Ingmar Persson argue that human beings possess a moral psychology evolved for life in small communities with rudimentary technologies. In their 2012 book Unfit for the Future: The Need for Moral Enhancement, they argue that people can—and should—overcome the limitations of their own moral psychology with the help of technology.
In its broad outlines, the idea of moral bioenhancement is as follows: Once we understand the biological and genetic influences on moral decision-making and judgments, we can enhance (read: improve) them with drugs, surgery, or other devices. A “morality pill” could shore up self-control, empathy, benevolence, and other desirable characteristics while discouraging tendencies toward violent aggression or racism. As a result, people might be kinder to their families, better members of their communities, and better able to address some of the world’s biggest problems such as global inequality, environmental destruction, and war.
In fact, the attempts of parents, educators, friends, philosophers, and therapists to make people behave better are already getting a boost from biology and technology. Recent studies have shown that neurological and genetic characteristics influence moral decision-making in more or less subtle ways. Some behaviors, like violent aggression, drug abuse and addiction, and the likelihood of committing a crime, have been linked to genetic variables as well as specific brain chemicals such as dopamine. Likewise, evidence suggests that our ability to be empathetic, our tolerance of other racial groups, and our sensitivity to fairness all have their roots in biology. As cutting-edge developments in neuroscience and genetics are touted as able to crack the morality code, the search for a morality pill will only continue apace.
To be fair, the effects of moral enhancement are usually subtle. At present, many laboratory studies compare volunteers' reactions to a placebo pill with their reactions to a drug that alters a specific brain chemical involved in moral decisions; on a scale of 1 to 10, the effects have been described as a shift from a 3 to a 4. Nevertheless, this line of thinking prompts concerns about manipulation and dignity. Already, it is involuntary subjects—such as psychopaths and criminals—who are on the front lines of the latest experiments in moral enhancement technologies. Many jurisdictions, for example, advise pharmacological treatments to reduce sex offenders' sex drives.
Moreover, some drugs widely prescribed to people suffering from mental illness, such as the anti-depressant citalopram and the anti-anxiety drug lorazepam, also affect specific brain chemicals involved in moral judgments, and therefore have (largely unintended) moral-enhancement effects. And these effects have also been observed in otherwise healthy individuals taking pharmaceuticals for mundane reasons. For example, the combined oral contraceptive pill increases oxytocin secretion, which has been associated with trust, cooperation, and generosity, while propranolol, commonly prescribed for high blood pressure, appeared to reduce implicit negative reactions to other racial groups. Given the recent developments in neuroscience and genetics, it does not seem unreasonable to suppose that the potential for influencing moral behavior will only increase.
If you are cringing at all of this, you aren’t alone. The wider public seems to recoil at the idea of morality pills—and some of the most prominent neuroscientists studying morality agree for three primary reasons.
The first is linked to the complexities of moral decision-making. There is no one right answer to any moral dilemma, such as the famous trolley problem, in which a runaway trolley is hurtling down a track toward five trapped workers. Consequentialists, who focus on the outcomes of moral action, would permit pushing a fat man in front of the trolley since his body would halt its advance, killing him to save five. Deontologists disagree, holding that some choices are morally forbidden no matter their effects. As it happens, neuroscientists have found that some drugs make healthy volunteers more likely to push the fat man and others less likely to. The question of whether taking one of these drugs morally enhances people depends on whom you ask.
But even if we arrive at a tentative consensus concerning morally desirable traits—say, that empathy plays an important role in our moral orientation toward others—and measures to boost them, we cannot cap those traits at some threshold level or entirely control their consequences. Taken to their logical extremes, moral impulses can tip over into immoral behavior. Empathy, for example, can be used for deceptive purposes. It might also cloud our judgment, making us excessively eager to punish those who have done wrong because we identify with their victims. Human values will continue to generate conflicts that have no correct solution.
A second, related issue is linked to the problem of trust and the potential abuses of moral bioenhancement. Implicit in the idea is that some elite group—whether neuroscientists, corporate executives, or policymakers—would claim to know some moral truth and then issue rules for the unenlightened to follow. But as many have pointed out, the regulators themselves are not always trustworthy. They might be incompetent, have conflicts of interest (such as a vested interest in selling a particular drug or device), or set out to deliberately mislead the public.
But perhaps the most fundamental problem is the link between morality and a sense of self, what Jeremy Waldron has called "an individual's awareness of her own worth as a chooser." Aristotle singled out the capacity for moral reasoning as what differentiates human beings from plants and nonhuman animals. Immanuel Kant pushed this idea further, tightly linking the moral competence of humans and their freedom, and making this the source of the special status and dignity of human beings. In popular consciousness, too, the link between identity and the choice between moral and immoral behavior is central to the very idea of being human.
Today, we are increasingly aware that new developments in science and technology bring with them increased moral responsibility. Biomedical moral enhancement seeks to capitalize on these new developments to make us better human beings and, in this sense, is an example of how scientific progress seeks to manage some of its costs. But by downplaying the relationship between morality and freedom, there is a danger that we could undermine the moral learning that goes on when we think actively about the validity of our own intuitions.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.