Earlier this year, a Google engineer named Blake Lemoine made headlines for a particularly outlandish claim: After engaging in conversation with a highly sophisticated algorithm named LaMDA, he decided that the A.I. was in fact a sentient being, and as a result it deserved legal personhood. Since Lemoine made this claim, Google has fired him, and almost everyone has concluded that he is clearly wrong, but this clearly wrong claim nonetheless launched a barrage of articles, many with the premise “Yes, but what if he wasn’t?”
Attention to this case isn’t surprising: A century of science fiction should be enough to demonstrate that we’re fascinated by the prospect of creating true artificial life. By this point, however, we ought to recognize that claims about the advent of new techno-religions tend to be—to use an industry term—almost entirely vaporware, with exactly none of the grassroots interest or staying power of the movements that are typically classified as religions. Anthony Levandowski’s much-hyped Church of AI, founded in 2015, officially closed last year (do religions “close”?) after several years of inactivity. Robotic priests, which have appeared in several countries, make for great dinner conversation, but their functionality has been greatly overstated—they are closer to Tickle Me Elmos than GPT-3—and they exist because of a few idiosyncratic individuals, not mass demand, so they remain prototypes. To paraphrase an old Jewish joke: Nobody cares if you dreamed that you were a leader of a hundred hasidim. We care if a hundred hasidim dreamed that you were their leader.
The coverage of these truthfully rather small stories perpetuates the tech-centric narrative that everything is “unprecedented.” But many religious traditions have been considering the prospect of nonhuman sentience for literal centuries—and living algorithms are only one manifestation of questions both larger and deeper: What makes us uniquely human? How much humanity must an entity exhibit before we treat it like a person?
Reframing the LaMDA question like this feels like turning off a quiet side street and finding oneself on a four-lane highway, because it turns out that there are quite a few entities with claims to personhood, and some of these claims are taken quite seriously indeed. On the fringes, we have aliens and animals: Just three days after the Washington Post reported on Lemoine’s claims, New York’s highest court ruled that Happy, an elephant at the Bronx Zoo, cannot be considered a person, despite scientific evidence that she, like members of a handful of other species, experiences a sense of self. A few weeks before that, Congress held hearings on unidentified flying objects for the first time in half a century, largely because of potential national security implications, but in the process the hearings made it more acceptable to soberly entertain evidence of extraterrestrial life.
In the center, of course, we have abortion. In the wake of Dobbs, conservatives across the country have advanced the notion of “fetal personhood,” according to which even fertilized eggs would receive the same legal protections as human beings. If enacted, such legislation could allow abortion providers to be prosecuted as murderers, threaten IVF, and endanger women even in clear-cut situations of life-threatening pregnancy. Alabama and Georgia have already passed laws classifying a fetus as a person, and a conservative Supreme Court is less likely to rule that such laws are unconstitutional. Despite its legislative success, however, fetal personhood does not grapple with the complexity of fetal development—including questions of viability, or even of awareness, in the first few weeks, that the fetus exists at all—and its proponents show little interest in exploring its legal implications for taxation, immigration, the census, or even using the HOV lane. As a result, it is hard to engage with fetal personhood as a philosophical question separate from its existence as a legal strategy for policing women’s bodies.
Abortion, like artificial intelligence, exerts a powerful gravitational force: It seems impossible to bring either subject up in conversation unless it sits at the center, and any attempt to do otherwise weighs the whole set of ideas down. But in truth, all of these debates—even those about aliens and animals—engage a broader idea, one that is worth spending time on: that the things we consider uniquely human may in fact be shared with beings from other worlds, beings we discover in our algorithms, human beings in gestational form, or even creatures we’ve long lived beside but underestimated. In the media, these have been covered as separate stories. From a religious perspective, they have long been connected.
All religions place enormous value on human life. In Judaism, which I study, this value is rooted in the idea that human beings are created in the image of God, which means that our worth is connected to something—exactly what is a matter of intense debate—that is essential to being human.
Now, there’s a way of interpreting this idea in which human beings are important only inasmuch as they are unique—in which abilities like reasoning, self-awareness, speech, and creativity are what separate us from other organisms. The problem is that making value contingent on uniqueness is risky, because it ensures that human value will come under attack every single time an organism or algorithm surprises us by doing something “human.” When those things happen—when a computer creates impressive artwork, when an elephant displays grief—we can only keep humans feeling special by second-guessing the results (“Yes, a computer painted a shockingly beautiful picture, but is that really what humans are doing when they paint?”) even while we quietly but persistently redefine what makes us human in the first place, shrinking the boundaries of humanity down to just the qualities that (we think) are safe from imitation.
This is not a fun game to play because there is no way to win. Permanent skepticism about the personhood of animals, algorithms, and aliens might preserve our sense of self-importance for the moment, but it renders the concept of “human” nothing but a chain of disconnected islands, slowly sinking beneath the waves each time a computer or animal does something breathtaking. Beyond that, there’s the very real danger that shrinking the meaning of “human” will literally dehumanize people who don’t fit into the new definition. If a person’s work can be replicated by a machine—or even if we expect that machines will soon replicate it—that work will undoubtedly be afforded less respect; for this one need look no further than Amazon’s unironically named Mechanical Turk. It is for similar reasons that pro-choice arguments tend to focus on affirming female bodily autonomy rather than denying fetuses their humanity: One of the few philosophers who has advocated for abortion access on the basis of the fetus’s lack of cognitive capacity and self-awareness also explicitly permits certain forms of infanticide.
Jewish thinkers never go down this path at all. For many rabbis, humans are valuable whether or not they are unique; not only could other beings share some of our “essential” human characteristics, but a few actually do. Rather than protectively shrinking from this expanded notion of humanity, rabbis have historically been very open to the idea of nonhuman sentience and have tended to see parallels between humans and nonhumans as an excuse to treat nonhumans better.
Evidence for this position isn’t hard to find. Take demons, for example. In rabbinic literature demons are not inherently evil; they are mortal beings with agency, sometimes imagined as the unintended offspring of human beings, and their existence doesn’t pose existential threats to human value. The rabbis also record the existence of an animal called “the man of the field,” which so resembled human beings (one modern rabbi speculated that it was an orangutan) that its corpse is afforded some of the dignity of the human dead, and medieval German rabbis talked about vampires and werewolves—sometimes even reading them into the Bible—without any concern about what their existence might entail.
As for beings that aren’t imaginary, Jewish thinkers—like many modern animal rights groups—have tended to value them on a gradient, with animals above plants and plants above inanimate objects. The reason for this, in some strains of thought, is that all of these creations are ensouled; the human soul has extra pieces, but it shares much with other beings. In both the Bible and the Talmud, people are regularly criticized for treating animals badly, with the critique occasionally coming from the animals themselves.
This openness to other life has even extended to life on other planets, which has the potential to radically diminish humanity’s sense of its own importance. Both Jewish and Christian thinkers have remained quite open to the possibility of extraterrestrial life, a few even arguing that a well-crafted universe ought to be a lot fuller. “Who will imagine,” wrote the medieval poet Jedaiah Bedersi, “that a wise manufacturer would prepare tools worth ten thousand talents to form an iron needle?” While a few modern rabbis—most notably the Lubavitcher Rebbe—have balked at the idea that aliens might have free will or agency, others have fully embraced the idea.
The most powerful example of all is the golem. The medieval golem isn’t a proto-robot, and it isn’t a parable about uncontrolled power. Instead, it’s something far more radical: It is a person, one who is brought into existence for the sole purpose of demonstrating that humans, like God, are powerful enough to create life. That humans and golems are essentially the same is the whole point; humans, for the rabbis, are also an artificial intelligence; the first being to be called a golem is Adam. Instead of diminishing human value, the possibility of making golems asks that people appreciate their true power and act accordingly.
Discussions about aliens, golems, and animals occupy very different parts of Jewish thought. What brings them together is the belief that human value is axiomatic, and that it is precisely because of the unassailability of our value that our instinct should be toward expanding the idea of what is human when we recognize it in others. This idea has a very important corollary: Because human value is the basis for valuing these near-humans, the latter can never supersede the former in importance. In other words, this model allows us to be generous with the idea of humanity while resolving concerns that our own status will be diminished in the process.
Decades ago, the philosopher Peter Singer advanced the idea of the moral circle: Our notion of humanity has finally expanded to include people of all races and classes, and in the future it ought to include machines, animals, and even plants. This transformation won’t happen magically; actual people need to be on board with it, and they need the transformation to come with a plausible narrative about why it is happening only now. Religious language, with its emphasis on human value as a core, unshakable idea, is a useful lens through which we can have a conversation that transcends—but does not replace—the boundaries of A.I. ethics, animal rights, and perhaps even abortion. If all signs indicate that we are entering an age of contested personhood, we ought to spend at least some time understanding it as such.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.