A Slate staff writer who regularly reports on Christianity responds to Andrew D. Hudson’s “A Priest, a Rabbi, and a Robot Walk Into a Bar.”
The Michigan-based company Covenant Eyes markets itself to Christians who want to stop viewing pornography. Its software takes screenshots of a user’s screen activity, uses A.I. to scan it for pornographic imagery, and then sends regular reports to the user and a designated “ally” who has agreed to hold him accountable. The company’s name comes from a Bible verse that reads, “I made a covenant with my eyes not to look lustfully at a young woman.”
Everyone wants technology to reflect their own worldview, and religious conservatives are no exception. In Andrew Dana Hudson’s short story “A Priest, a Rabbi, and a Robot Walk into a Bar,” a pair of mostly secular tech experts confront the limits of their own “progressive, tech world sensibility”—both morally and in the marketplace.
When the reader meets David, the rabbinical school dropout is being interviewed by an Austin startup to train its customer service chatbots to avoid anti-Semitic language. Those chatbots had been programmed to “learn” from humans, but along the way they were picking up subtle conspiracy language from angry, obsessive customers. David’s job is to “sort out the good Jewish-stuff from the bad Jewish-stuff,” as he puts it.
At David’s new gig, he befriends Mark, a onetime aspiring priest who had become a “traitorous Proddy” and landed in the Texas tech scene. Both gravitated toward the secular world but retained an intuitive understanding of religious language and values. It’s an unusual combination of expertise, and like all good entrepreneurs, they soon figure out how to monetize it, in the form of a business, Decen.cy, that helps companies “groom” their bots to be better behaved. They play up their religious background to clients—David displays a yarmulke, Mark a clerical collar. But their performance of piety is challenged when a client asks them to program A.I. to do more than just avoid offense. Frank Teller is the ambitious pastor of a Texas megachurch so successful that many of its members live in “intentional communities” owned by the church. The church uses customer service chatbots to manage basic community requests. But Teller doesn’t like it when a bot recommends “reflection and mindfulness” in response to his granddaughter’s request for prayer. He wants David and Mark to design a custom A.I. that incorporates conservative evangelical language and priorities: “We need A.I.s that share our values.”
Human preferences and values are baked into every tool we create, including A.I. But it’s one thing to acknowledge the stubbornness of bias and another thing to add it on purpose. Decen.cy considers its own team’s values to be mainstream and anodyne. Mark and David are much more comfortable debating whether yoga is culturally appropriative than programming a bot to recommend prayer in response to a mental health crisis. Teller even wants to track his flock’s adherence to ideals like humility and chastity. David fears his company is being hired to create an “A.I.-policed theocracy.” Is it right to take on a client who doesn’t share its progressive values, and who in fact spouts the kind of “low-key anti-Semitism” that David was first hired to squelch?
The problem of A.I.’s ability to buttress or erode religious values is one that religious conservatives are already grappling with on their own. Earlier this year, the Southern Baptist Convention, the country’s largest Protestant denomination, published a statement on artificial intelligence that struck a cautiously optimistic tone. “When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him,” its authors wrote. “A.I. should be designed and used in such ways that treat all human beings as having equal worth and dignity.” The report also says that the technology should not be used for sexual pleasure, to wage war without human oversight, or to “distort truth through the use of generative applications.”
Hudson’s particular vision of the tech-enabled shift in spirituality portended by Teller’s request is chilling. But religiously motivated uses of A.I. and surveillance technology are inevitable—in fact, plenty already exist. One company provides facial recognition technology to churches so they can keep track of member attendance. Smartphones are a fixture in church services, many large churches have their own apps, and millions of people read the Bible on screens instead of in print. The Church of England recently developed an Alexa “skill” that reads prayers, answers questions like “Who is God?,” and helps users locate nearby churches. The Vatican now sells a $110 “eRosary” bracelet that encourages Catholics to pray and logs their progress as they do so.
Some of these innovations are gimmicks, but others suggest sea changes in the way people experience religious faith, both privately and in community. Gamifying prayer with a Fitbit-style rosary might make it easier to stick to habits of devotion, or it might turn intimate spiritual practices into yet another digital chore. What does it mean to pray with Alexa? Will it someday “answer” individual prayers by suggesting certain products and resources? And will voluntary religious communities be tempted to use surveillance technology to hold members accountable—with or without their consent—on attendance, donations, or personal issues like pornography use? Everyone wants “A.I.s that share our values,” as Teller tells Mark and David. The only question is whose values, and to what end.
This story and essay, and the accompanying art, are presented by AI Policy Futures, which investigates science fiction narratives for policy insights about artificial intelligence. AI Policy Futures is a joint project of the Center for Science and the Imagination at Arizona State University and the Open Technology Institute at New America, and is supported by the William and Flora Hewlett Foundation and Google.