In the opening session of the World Economic Forum’s meeting last week in Davos, founder and executive chairman Klaus Schwab said, “We must develop a comprehensive and globally shared view of how technology is affecting our lives and reshaping our economic, social, cultural, and human environments. There has never been a time of greater promise, or greater peril.”
This observation serves as the core of what he and other world leaders are terming “the Fourth Industrial Revolution,” the theme of this year’s wintry summit in Switzerland. Building on the German government’s “Industry 4.0,” the current national strategy for “smart” factories integrating physical manufacturing with the Internet of Things, Schwab and the WEF argue that the coming years—likely littered with 3-D printers and designer babies—will mark the beginning of a revolution unlike any we have ever experienced, unique for its scale, scope, and complexity. They cite the convergence of multiple sectors of technology and industry (artificial intelligence, nanotechnology, autonomous vehicles, to name a scant few) as evidence that humans are entering a new era of profound, exponentially increasing possibility and risk.
On this last count they’re right. Yet this has, in fact, been true of every other industrial revolution that has come before. One might even go so far as to say that an unprecedented era of possibility and risk is the defining feature of industrial revolutions as we retrospectively understand them. The spinning jenny (the multispindle spinning frame that launched the textile revolution of the 18th century), the steam engine, and even the discovery of fire were all massive leaps in human technological development that similarly changed the scale, scope, and complexity of our collective intellectual landscape.
Even exponential growth rates, often cited as the defining feature of this so-called fourth Industrial Revolution, are nothing special—they occur any time a system grows at a constant fractional rate. The simple fact that things will fundamentally change at a nonlinear rate doesn’t differentiate this next progression in our technological evolution.
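To make that point concrete, here is a minimal sketch (the function name and the 5 percent rate are ours, chosen purely for illustration) showing that any quantity compounding at a fixed fractional rate per step traces an exponential curve:

```python
def grow(x0, rate, steps):
    """Compound x0 at a constant fractional rate for a number of steps."""
    values = [x0]
    for _ in range(steps):
        # Each step adds the same *fraction* of the current value,
        # which is exactly what produces exponential growth.
        values.append(values[-1] * (1 + rate))
    return values

trajectory = grow(100.0, 0.05, 10)
# The step-by-step compounding matches the closed form x0 * (1 + r)**n.
assert abs(trajectory[-1] - 100.0 * 1.05**10) < 1e-9
```

Nothing here is unique to any particular technology: bacteria in a dish, money at interest, and transistor counts all follow the same rule whenever the growth rate stays a constant fraction of the current size.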
In fact, the phrase the fourth Industrial Revolution has been around for more than 75 years. It first came into popular use in 1940, in a document titled “America’s Last Chance” by Albert Carr, which cast “modern communications, merely as an additional manifestation of the industrial revolution—as the beginnings of a new phase, a ‘fourth industrial revolution.’ ” He delivers a hauntingly familiar warning to the American people that their democratic way of life is at risk and suggests a technological revolution as the way forward. Since then, historians and scientists have proclaimed this “new” revolution’s commencement with the arrival of atomic energy in 1948:
With the coming of intra-atomic energy and supersonic stratosphere aviation we face an even more staggering fourth Industrial Revolution.
Ubiquitous electronics in 1955:
After World War II, we entered a fourth industrial revolution, with great advancement in electronics …
The computer age of the 1970s:
Now in the 1970’s, we are well into the throes of a fourth industrial revolution, one phase of which is guided by electronic computers, and a coming phase fueled by atomic energy.
All the way to the beginnings of our modern information age in 1984:
Walt Rostow, the distinguished American economist/economic historian, describes this as the fourth industrial revolution—the information revolution.
The White House even hailed nanotechnology as the harbinger of “the next Industrial Revolution,” so the WEF is at least in good company. This phrase, then, seems to be little more than a refrain of 20th- and 21st-century innovation. Each time, the framing of “the next best thing” in technological development as a “fourth Industrial Revolution” has failed to garner any sort of economic, social, or political capital, despite continued attempts to make it fit that mold.
So why does this phrase crop up every 10 or 20 years among government and industry professionals? What makes society so desperate to fit our current behavior to outdated models of 19th-century innovation? Perhaps in the post-nuclear age, when our capacity for destruction has reached levels that frighten even heads of state, we’re desperately re-examining history for reassuring patterns and evidence that what we’re doing now is the natural outgrowth of what has come before—and more importantly, what we’ve previously survived. But then again, maybe the resurgence of the framing of the “fourth Industrial Revolution” simply marks something much less pernicious, maybe even a simple yearning for historical familiarity.
In spite of all this, the World Economic Forum spent the entire summit doing its best to make the case that this coming revolution is somehow more new, more different, and more threatening than any we have previously experienced. But its justifications are the same as they have always been and in fact are simply characteristics of technological revolutions in general. The spinning jenny was just as threatening to lower-class weavers as robotic doctors’ assistants are to future nursing staff, with perhaps one exception not highlighted in any of the discussions in Davos: This time, white-collar jobs are on the line, not just manual labor and blue-collar work. Maybe that’s what scares the WEF the most.
Emerging technologies have a profound power to transform society, for good or evil—this is well-understood, well-recognized, and a theme of 21st-century discourse in technology studies. Schwab and the WEF clearly understand this and are rightly convinced that we have the power to proactively design the future we want to live in, rather than innovate from a defensive or reactionary position. But Schwab and co. fail to acknowledge—or worse, knowingly ignore—the massive efforts of those in the communities of responsible innovation, anticipatory governance, and science and technology studies, who for decades have been grappling with the moral and ethical questions our world leaders gathered to discuss.
What lies ahead in our future technological development is clearly uncharted territory, replete with its own set of unique snares and dragons, regardless of the words we use to describe it. The coming decades of human technological innovation represent a social and political problem, not just a technological one, and demand expertise in finding social and political solutions—not just the vapid pontifications of professors and economists.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.