We’re living in a technology malaise.
Our news feeds and Twitter streams barrage us with banality: The Apple Watch ain’t all that, the Internet is full of dimwitted trolls, and the most pressing issue of the day is whether it’s moral to keep watching Game of Thrones. So I am inclined to sympathize with the New York Times’ Paul Krugman when he bemoans the incessant empty hype around technology these days. In economic terms, “the whole digital era, spanning more than four decades, is looking like a disappointment,” he writes. Disconnected venture capitalists, multibillion-dollar valuations of content companies, millionaires telling us to check our privilege—it’s all getting a bit surreal and a bit stupid. The last 20 years of Silicon Valley cheerleading feel hollow today, while the advances of the information economy don’t seem to have made a substantial dent in our deep political and economic problems. Worse, by pinning our hopes on technology, Krugman argues, we’ve distracted ourselves from pressing issues like inequality and crumbling infrastructure.
It shouldn’t, of course, take a New York Times columnist to tell us that Apple product rollouts are not the second coming. And social networks and mobile advertising can’t possibly promise a meaningfully better future when we’re staring down global warming and a widening wealth gap. But while the malaise we detect is real, it’s one of complacent consumerism, not technology. If we hope to save ourselves from disaster, technology—real technology—remains our only hope. We shouldn’t roll our eyes at it, as Krugman does. We should give it some credit.
It is nearly impossible to hear this truth amid the deafening, dulling online noise. The tech hype Krugman derides gives way to the chattering classes bemoaning Silicon Valley solutionism and uninformed pundits preaching about the “hegemonic underpinnings” of technology. Krugman is too smart to buy into anti-technological counterhype, but he remains skeptical of 3-D printing and Big Data as transformative developments. He’s right that trendy technologies suck up oxygen when we could use more debate on, say, the Trans-Pacific Partnership or climate change. I’d also add that too much ink is spilled on pseudoscientific debates between techno-utopians and dystopians pondering how many A.I.s can dance on the head of a pin before they kill us all.
Yet our fixation with tech has a very real justification, despite all the hot air it produces. It arises from an underlying sense that we need new technology. Because as it stands, humanity is seriously screwed, even if we aren’t sure exactly how. We mistake technology for a cure-all because it is a cure-something. It may not satisfy growth-hungry economists, but over the next century, technology is our only hope.
While technology did spur the huge uptick in economic growth that began with the Industrial Revolution, Krugman is right to observe that there is no guarantee technology will sustain that growth. As Thomas Piketty, among others, has shown, that growth appears to be leveling off. Tech will most likely not provide a “new economy” so much as prop up the old one. Yet this makes technology more important than it would be under the optimistic fantasy. In the next few decades, technology may play less of a role in fueling major productivity gains, but it will play a far greater one in preventing humongous productivity losses. And by productivity losses, I mean ecological catastrophe, biodestruction, infrastructural collapse, and mass death.
Forgive me for harping on this, but we’re screwed. That’s why chatter around the dangers of evil A.I. strikes me as worthless, because A.I. wouldn’t even make my list of top 10 threats to humanity. And it’s why Krugman’s dismissal of technology’s impact is shortsighted. While deadly atrocities in places like Somalia or Rwanda are all too easy for First Worlders to ignore, climate-driven resource shortages may well change that. Policies like carbon taxes and practices like recycling won’t prevent massive drought, flooding, tsunamis, catastrophic sea-level rise, pandemics, food and fuel shortages, and, most of all, the warfare that could arise from these crises. The mechanisms to prevent and cope with these crises can only be technological, particularly in the form of environmental engineering, biotechnology, and pervasive computing. All of these advances will pose their own dangers, but these dangers are only possibilities next to the very likely future that nature has planned with our unwitting assistance: the sea level rising a few meters, mass extinctions and biodiversity loss, and surprises like pandemics and super-volcanoes.
Yet if things are so desperate, why does technology seem so banal today, as Krugman observes? Because humanity is terrible at speculative crisis management. And our most necessary technological, political, and economic advancements only tend to come once crises hit—or after.
The 1920s and ’30s were also filled with an “Is that all there is?” malaise, as people wondered whether technological progress had been worth the trouble, given all the political instability, mindless technocracy, and environmental damage (from coal): The writings of H. G. Wells and George Orwell reveal profound disenchantment with pretty much everything in contemporary society, while scientific giants like J. B. S. Haldane and John Desmond Bernal, as well as science-fiction great Olaf Stapledon, looked romantically at communism as a possible salvation. In Vienna, “modernism” was pretty much synonymous with “critique of modernity.” It wasn’t until World War II hit that jobs multiplied and the benefits of scientific advances and technology became apparent again—even if the immediate benefits were simply more advanced and efficient ways of killing people. (Not all, though: The shift from coal to oil decreased pollution. Back then, oil was clean energy.) During the Cold War, educating citizens and advancing science became an imperative in order to stop the Soviets from getting ahead, while the USSR centrally planned itself into advanced science, great pianists, mass imprisonment and execution, and economic collapse. Yet in addition to all the nukes and paranoia of that era, we also got the space program, the Internet, and abstract expressionism. And the information revolution of the ’90s was, more or less, an unintended consequence of Cold War spending. It was Tim Berners-Lee who invented the Web, at the European state-funded research organization CERN—not Steve Jobs or Bill Gates at their consumer tech companies. CERN was founded in the 1950s to do nuclear research—that is, to serve the need to blow up the Soviets better than they could blow us up. Unfortunately, wide-scale suffering (or the imminent threat thereof) seems to correlate with human advancement (if you want to call it that).
It’s a mistake to see the market as having generated the tech boom of 1995, when it actually was a delayed side effect of corporations taking advantage of Cold War–funded infrastructure. Environmental and biological catastrophes (and the resulting wars and conflicts) will provoke different kinds of technological responses than the Cold War, and it’s not a given that the results will be quite so positive. In their interesting if flawed “future history” The Collapse of Western Civilization, Naomi Oreskes and Erik M. Conway spin out a scenario in which injecting sulfate particles into the atmosphere decreases global temperature but also catastrophically shuts down the Indian monsoon, leading to mass famine in Asia, the abandonment of the project, and the drowning of New York City. But these kinds of risks are inherent to technology. Looking back, I’m still surprised we made it 40 years without someone dropping a nuke somewhere, and I have no idea if our lucky streak will continue. But what looks like a malaise today will turn out in retrospect to have been humanity twiddling its thumbs before the disaster hit (whatever disaster it turns out to be), at which point some random research that no one currently cares about will turn out to have been crucially important. I don’t know what it will be, but it won’t be the Apple Watch, and it won’t be anything that was hawked by Silicon Valley venture capitalists.
So when technology looks like it’s going nowhere, remember that we aren’t driving the car (and neither is a robot). From the moment the Industrial Revolution triggered the massive 200-year explosion in growth, we made a Faustian bargain. From that point on, it was pretty much a given that the motor of technological progress and economic growth would lead us to where we are today. Henry Adams called this the dynamo, embodying the dominant force and technological motor of modern human history, after he had abandoned the idea of a comprehensible human direction to history, “satisfied that the sequence of men led to nothing and that the sequence of their society could lead no further.” Since growth begot more growth, only external factors would stop the technologically enabled acceleration. As Piketty pointed out in Capital in the Twenty-First Century, we are finally meeting those external factors. In that sense, Krugman is right: Technology is becoming less of a motor of growth. But it will—it must—become our chief weapon for coping with 21st-century crises.
So let’s all yawn at the Apple Watch and the other gaudy, noisy froth that clogs up our days. But technology will come to seem awfully exciting again once our asses are on the line.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.