This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. A Future Tense conference on whether governments can keep pace with scientific advances will be held at Google's D.C. headquarters on Feb. 3 and 4. (For more information and to sign up for the event, please visit the NAF Web site.)
Mary Shelley’s 1818 novel Frankenstein, or the Modern Prometheus, is generally considered the first work of science fiction. It explores, in scientific terms, the notion of synthetic life: Dr. Victor Frankenstein studies the chemical breakdown that occurs after death so he can reverse it to animate nonliving matter. Like so many other works of science fiction that followed, Shelley’s story is a cautionary tale: It raises profound questions about who should have the right to create living things and what responsibility the creators should have to their creations and to society.
Think about that: Mary Shelley put these questions on the table almost two centuries ago—41 years before Darwin published The Origin of Species and 135 years before Crick and Watson figured out the structure of DNA. Is it any wonder that Alvin Toffler, one of the first futurists, called reading science fiction the only preventive medicine for future shock?
Isaac Asimov, the great American science fiction writer, defined the genre thus: “Science fiction is the branch of literature that deals with the responses of human beings to changes in science and technology.” The societal impact of what is being cooked up in labs is always foremost in the science fiction writer’s mind. H.G. Wells grappled with creating chimera life forms in The Island of Doctor Moreau (1896), Aldous Huxley gave us a heads-up on modified humans in Brave New World (1932), and Michael Crichton’s final science-fiction novel, Next (2006), brought the issues of gene splicing and recombinant DNA to a mass audience.
What’s valuable about this for societies is that science-fiction writers explore these issues in ways that working scientists simply can’t. Some years ago, for a documentary for Discovery Channel Canada, I interviewed neurobiologist Joe Tsien, who had created superintelligent mice in his lab at Princeton—something he freely spoke about when the cameras were off. But as soon as we started rolling, and I asked him about the creation of smarter mice, he made a “cut” gesture. “We can talk about the mice having better memories but not about them being smarter. The public will be all over me if they think we’re making animals more intelligent.”
But science-fiction writers do get to talk about the real meaning of research. We’re not beholden to skittish funding bodies and so are free to speculate about the full range of impacts that new technologies might have—not just the upsides but the downsides, too. And we always look at the human impact rather than couching research in vague, nonthreatening terms.
We also aren’t bound by nondisclosure agreements, the way so many commercial and government scientists are. Indeed, a year before the first atomic bomb was built, the FBI demanded that the magazine Astounding Science Fiction recall its March 1944 issue, which contained a story by Cleve Cartmill detailing how a uranium-fission bomb might be built. Science-fiction writers began the public discourse about the actual effects of nuclear weapons (see, for instance, Judith Merril’s classic 1948 story “That Only a Mother,” which deals with gene damage caused by radiation). We also were among the first to weigh in on the dangers of nuclear power (see, for example, Lester del Rey’s 1956 novel Nerves). Science fiction is the WikiLeaks of science, getting word to the public about what cutting-edge research really means.
And we come with the credentials to do this work. Many science-fiction writers, such as Gregory Benford, are working scientists. Many others, such as Joe Haldeman, have advanced degrees in science. Others, like me, have backgrounds in science and technology journalism. Our recent works have tackled such issues as the management of global climate change (Kim Stanley Robinson’s Forty Signs of Rain and its sequels), biological terrorism (Paolo Bacigalupi’s The Windup Girl), and the privacy of online information and China’s attempts to control its citizens’ access to the World Wide Web (my own WWW:Wake and its sequels).
Print science-fiction writers often consult for government bodies. A group of science-fiction writers called SIGMA frequently advises the Department of Homeland Security about technology issues, and Jack McDevitt and I recently were consulted by NASA about the search for intelligence in the cosmos.
At the core of science fiction is the notion of extrapolation, of asking, “If this goes on, where will it lead?” And, unlike most scientists who think in relatively short time frames—getting to the next funding deadline, or readying a product to bring to market—we think on much longer scales: not just months and years, but decades and centuries.
That said, our job is not to predict the future. Rather, it’s to suggest all the possible futures—so that society can make informed decisions about where we want to go. George Orwell’s science-fiction classic Nineteen Eighty-Four wasn’t a failure because the future it predicted failed to come to pass. Rather, it was a resounding success because it helped us prevent that future. Those wishing to get in on the ground floor of discussing where technology is leading us would do well to heed Alvin Toffler’s advice by cracking open a good science-fiction book and joining the conversation.