It’s OK to Be a Luddite

Mocking people who fear technology’s dehumanizing creep is easy. Here’s why they have a point.

Illustration by Rob Donnelly.

Technology will save us! Technology sucks! Where today’s techno-utopians cheer, our modern-day Luddites, from survivalists to iPhone skeptics to that couple who dress in Victorian clothing and wind their own clocks, grumble.

Understanding the former urge is pretty easy: It’s a fantasy of a perfect world. The Luddite impulse, however, isn’t so clear—and we shouldn’t automatically dismiss it as one that scapegoats technology for society’s ills or pines for a simpler past free of irritating gadgets. Rather, today’s Luddites are scared that technology will reveal that humans are no different from machines—that it will eliminate what it means to be human. And frankly, I don’t blame them. Humanity has had such a particular and privileged conception of itself for so long that altering it, as technology must inevitably do, will indeed change the very nature of who we are.

To understand the appeal of being a Luddite, you need only read these words of Leon Trotsky:

To produce a new, “improved version” of man—that is the future task of Communism. Man must see himself as a raw material, or at best as a semi-manufactured product, and say: “At last, my dear homo sapiens, I will work on you.”

This vision, promptly disposed of by Stalin, is so intuitively unappealing that even with the return of authoritarianism to Russia, neither Vladimir Putin nor any of his associates has revived the idea of scientifically perfecting man. Such language brings back bad memories of eugenics, Nazi experiments, Tuskegee, and worse. Yet those fears don’t stop us from using technology to become those new, improved versions of ourselves—from buying up iPads and smartphones and storing the digital residue of our lives in the cloud. In reaction, modern-day Luddism arrives in a variety of forms, many of them vulgar. Writers from Neil Postman to Jerry Mander, in books like Amusing Ourselves to Death and Four Arguments for the Elimination of Television, blame technology for making us brain-dead sheep; the solution, of course, is to eliminate it. (Spoiler: That’s not going to happen.) But more thoughtful writers, like philosophers Wilfrid Sellars and Willem deVries, recognize that squaring our conceptions of ourselves with what science and technology tell us entails some pretty unsettling revelations.

At its origins in the early 19th century, Luddism was not about technology’s evils—it was about worker rights and a fear of job losses. Angry English workers marched together and destroyed machinery in what was essentially a vigilante labor movement, only for many of them to be tried and executed by their government. (The history is covered well by E. P. Thompson’s classic The Making of the English Working Class.) Over time Luddism’s definition broadened, boiling down to a worldview that was anti-technological in general. But ruminating on nature (as in the beautiful, evocative books of Robert Macfarlane and Tim Robinson), hating nuclear weapons, or preferring birdwatching to Clash of Clans doesn’t make you a Luddite. While contemporary Luddism fixates on the evils of technology, it’s not driven by the threat of technology supplanting or replacing humanity. Rather, Trotsky’s quote reminds us of the possibility that we will come to see ourselves as no different from machines. Technology doesn’t dehumanize us; it’s the knowledge behind it that does. Fighting the machine, then, is fighting a vision of the future in which the human is the machine.

Luddism is not nostalgia for the past. There is so much wrong with the past that it’s practically an argument against Luddism in itself. Even the supposed evils of technology too often turn out to be evils baked into the soul of humanity. Hannah Arendt dwelt on the mechanized totalitarianism enabled by the industrial revolution, claiming it had made the Nazis possible. Yet a host of low-tech atrocities, from the Armenian genocide to the Rwandan genocide and countless third-world dictatorships, have shown that organized terror and slaughter can be achieved with no more technology than a radio and a machete. If you truly wish to go backward, you must go back beyond the invention of what a friend termed the most harmful human invention of all time: agriculture. Agriculture not only enabled the exponential growth of the population of suffering souls, but also set the scene for tyranny, slavery, and every other atrocity that recurs throughout human history. But unless you’re willing to go that far, to be a Luddite you only have to advocate against future technology, not for a return to the past.

Consequently, the Luddite impulse is to embrace a certain distinction between human and machine. Thomas Pynchon put his finger on it in 1984 (“Is It O.K. To Be a Luddite?”) when he wrote that the midcentury Luddite impulse, embodied particularly in science fiction, embraced “a definition of ‘human’ as particularly distinguished from ‘machine.’ ” “Humanity” was held up as an incommensurable yardstick: You either had it or you wanted it. In Star Trek, the android Data strove to gain humanity, while those who were assimilated by the Borg ceased to be human.

Sometimes this distinction did not favor the human; in the 1960s, gloomy science-fiction writers started to point the finger of blame not at technology but at humans themselves. But whether humans were better or worse than technology, they were always different from it. It was rare that humans were superseded and made redundant, as in the science fiction of Olaf Stapledon and Stanislaw Lem, or that they ran toward their own negation, as with the J. G. Ballard heroes who deny the human, abet the apocalypse, and have sex with cars. (Ballard is the anti-Luddite par excellence.) Otherwise, the human (or an alien surrogate for humans) tended to remain at the center of the picture. “Feelings,” “values,” “creativity,” “culture,” and other such “human” qualities serve as a barometer to separate us from everything else, whether it’s animals, robots, or textile machinery. And membership has its privileges: Humans are entitled to freedom and equality and fraternity, not to be used as a means to an end. We “dehumanize” those we consider inferior, whether they’re slaves or Jews or those-jerks-across-the-river. The original Luddites objected to capitalists treating them as interchangeable labor; today’s Luddites see technology as threatening the value we assign to each individual life, collapsing individual lives into utilitarian statistics.

This human exceptionalism has its roots far back in Renaissance humanism, when the cosmos was separated into three clear tiers: nature at the bottom, man in the middle, and God at the top. Man’s position was unique, rooted in the possession of a soul and certain unique qualities that flowed from it. Nature might be a clockwork machine, but humans most certainly were not. With the exception of a couple of radicals like 18th-century philosopher La Mettrie, author of L’homme machine (Machine Man), thinkers were generally hesitant to erase that line; even atheist-materialist Karl Marx described humans as an absolutely singular species.

When you start talking about fine-tuning and improving the human, you move toward treating humans as tools and raw materials—in other words, how we treat machines and animals. Unfortunately, once you start down the road of medicine and transplants and heart monitors and antidepressants and biotechnology, it becomes very hard to stop, even with strict ethical guidelines. Tools can be fixed, and damned if most of us aren’t broken, so things like designer babies and Clones-R-Us become simultaneously appealing and horrifying. This unease is where Luddism begins to have its pull on most of us, because it eats at our previously robust sense of the human.

There aren’t too many human universals, but those that exist are powerfully resilient: family and parenting, emotions, language, some kind of morality, taboos, art, some version of gender roles. While the specifics of each vary across time and cultures, the abstract categories have remained fairly robust. Those who strove to aggressively “improve” the human, from Plato to Campanella to Trotsky, often advocated the breakup of the nuclear family. Yet the parent-child bond has remained mostly intact through generations of sweeping changes. Single motherhood, same-sex marriage, and polyamory are pretty mild changes to the general model. But while you can argue that evolution has aligned us with certain conceptions of family, it’s a naturalistic fallacy to say that they’re necessarily better for us. Rather, these conceptions are markers of humanity. So is gender: Many argue for changing our ideas around gender roles, but few argue for eliminating the very concept of gender. To hold on to family, gender, and other human signifiers, simply because they’re human signifiers, is to be a Luddite. Giving up any one of them—probably with technological assistance—is taking a step toward seeing the human as an empty container for whatever you want to pour into it, no different from a computer running software.

There is one special human universal that deserves attention: death. The process of aging and death is one of the very few absolute constants of human existence, and its eradication would in itself spell the end of “the human” as we know it. At the hypothesized point of the Singularity, when we upload our minds to the cloud and leave our bodies behind (or create new ones), the entire notion of an indivisible and distinct human individual will disappear, and we’ll become indistinguishable from software (or hardware). We’ll turn, as the science-fiction writer John Sladek suggested, from oranges into orange juice: “We [will] become ‘etherized,’ in both of Eliot’s senses of the word: numb and unreal.” If that makes you nervous—if it is more threatening than the prospect of inevitable death—then you are a Luddite. (I wouldn’t mind a few extra centuries, but the whole uploading thing spooks me.) Cordwainer Smith took this idea a step further in his own science-fiction stories of the 1950s, in which “the nightmare of perfection had taken our forefathers to the edge of suicide,” and so humanity brings back politics, money, multiple languages, diseases, and death in the ultimate Luddite movement, termed the Rediscovery of Man.

For the thinking Luddite, technology becomes a serious threat only to the extent that it threatens to collapse the boundary between human and machine. I think it very unlikely that this boundary will collapse in practice any time soon, despite the predictions of the transhumanists who cheer on the Singularity. These Luddite fears, however, are going to inform every step that we take toward “improving” the human. Already, psychology has taken a beating from neuroscience and pharmacology, making us wonder how rational and free we actually are. Our latitude to hold on to our traditional ideas—to be an intelligent Luddite, in other words—may lie in the extent to which we remain ignorant of scientific realities about the world and lack the capacity to manipulate them. A philosopher like Patricia Churchland aggressively and convincingly argues for the total replacement of “folk psychological” statements like “I’m stressed out” with a sentence like “My amygdala is overexerting control over my cerebral cortex,” but abandoning our current language of emotions is something many, including myself, see as a real and even dangerous loss. We’re dumb enough to believe that humans are still special—for the time being.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.