The 130-year timeline of telephone innovation describes a relatively steady rise as the technology under the surface was continuously improved, with a handful of spikes for inventions such as the rotary dial, touch-tone dialing, the fax machine, and, of course, 1959’s Princess phone. The changes were predictably predictable, the future rising in a fairly smooth incline.
But the timeline of innovation for the defining technology of our new age is barely a line at all: The Internet happens, and all hell breaks loose. The future no longer works the way we thought it did. The spikes become not just continual but frequently simultaneous and radically unpredictable. You couldn’t have foreseen Twitter, and if you had, you probably would have dismissed it as a dumb idea. I would have.
And even if you had had the foresight to envision Twitter’s success, you could not have predicted the hashtag. Even Twitter didn’t predict that. Hashtags were invented by a user, Chris Messina, who in 2007 casually suggested that attendees at the BarCamp conference mark their tweets with #barcamp. Without hashtags, such as #ferguson, that enable users to view a stream of tweets on a topic, Twitter would not have become an important medium for news.
The telephone’s line of progress was a low incline with very few spikes because the telephone company was the only one allowed to innovate with telephony. The Internet’s progress is shaped like the lines coming out of a cartoon explosion because anyone can innovate, and then iterate on other people’s innovations. With sites like GitHub, which enables developers to easily build on the work of others, and Stack Overflow, where developers help one another over hurdles of every height, we are getting cascades of innovation.
If the fundamental purpose of a telephone is to allow people to talk, the fundamental purpose of the Internet is to allow people to innovate alone and together. That is, the purpose of the Net is to confound predictions. And it has been doing an excellent job of it.
* * *
How we predict tells us much about how we think the universe works and how the future unfolds.
Newton gave us our basic model for how prediction works: express the immutable laws of the universe in formulas, plug in the data, and do the math. With the same formula you can predict how fast a coin will fall when dropped from a leaning tower and when the moon will next cross in front of the sun.
For example, Pierre-Simon Laplace, known as “the Newton of France,” in 1814 drew the appropriate conclusion from Newtonian prediction, albeit one that Newton himself resisted for theological reasons: If you knew the exact state of the entire universe at any one instant, you could predict everything that would ever happen, and everything that ever had. That’s a consequence of living in a clockwork universe in which every event has a determinate cause, and those causes obey eternal, knowable laws.
That’s fine in theory. In practice, since the cause of each thing is actually every other thing, it gets complicated quickly. For example, when Edmond Halley approached Newton for help in figuring out how the gravitational fields of Jupiter and Saturn would affect the path of his eponymous comet, the complexity of calculating the mutual effects of those three moving bodies was too much for Newton. It took three French aristocrats a full summer of filling in a table line by line to get the calculation for the next appearance of Halley’s comet right to within a month. (Newton himself denied the clockwork model of the universe, insisting that it had inherent instabilities that required divine intercession to be set right.)
Now we have learned that if we want to precisely predict the path of a body through the heavens, we have to emulate those French aristocrats: We create computer simulations that reapply Newton’s law at fine-grained intervals, adjusting the position and gravitational pull of each relevant body at each step. It clearly works—NASA got a probe to Pluto just 72 seconds early after a 9½-year trip—but it means that if we want accuracy, we can’t use formulas to skip ahead to a final result.
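The stepwise approach those simulations use is conceptually simple: at each small time step, recompute every body’s gravitational acceleration from every other body, then nudge velocities and positions accordingly. Here is a minimal sketch in Python; the bodies, masses, and step size are invented for illustration, and real mission software uses far more sophisticated integrators.

```python
# A sketch of fine-grained simulation: instead of a closed-form formula,
# reapply Newton's law of gravitation at small time steps.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(bodies, dt):
    """Advance every body one time step of dt seconds.
    Each body is a dict with 'm' (kg), 'pos' [x, y] (m), 'vel' [vx, vy] (m/s).
    """
    # First compute the net acceleration on each body from every other body.
    accs = []
    for i, b in enumerate(bodies):
        ax = ay = 0.0
        for j, other in enumerate(bodies):
            if i == j:
                continue
            dx = other["pos"][0] - b["pos"][0]
            dy = other["pos"][1] - b["pos"][1]
            r = (dx * dx + dy * dy) ** 0.5
            a = G * other["m"] / (r * r)  # magnitude of acceleration
            ax += a * dx / r
            ay += a * dy / r
        accs.append((ax, ay))
    # Then update velocities and positions (semi-implicit Euler).
    for b, (ax, ay) in zip(bodies, accs):
        b["vel"][0] += ax * dt
        b["vel"][1] += ay * dt
        b["pos"][0] += b["vel"][0] * dt
        b["pos"][1] += b["vel"][1] * dt

# Toy example: a satellite circling an Earth-mass body at 7,000 km,
# stepped one second at a time for 1,000 seconds.
earth = {"m": 5.972e24, "pos": [0.0, 0.0], "vel": [0.0, 0.0]}
sat = {"m": 1000.0, "pos": [7.0e6, 0.0], "vel": [0.0, 7546.0]}  # ~circular-orbit speed
bodies = [earth, sat]
for _ in range(1000):
    step(bodies, 1.0)
```

The point of the sketch is the shape of the computation: there is no formula to skip ahead with, only the same law reapplied over and over at each tick.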
Even with this technique, while we can predict with great precision exactly when on March 9, 2016, there will be a solar eclipse, we’re clueless about whether there will be clouds between us and the cosmic spectacle. Other things we can’t predict about that day: Will we get a flat on the drive to a vantage point? Will we see a wild rabbit on the way, or order a meal at a drive-thru that will spill on our lap? Will there be a new album by Beck to listen to, and if so, will we like it? (The chances are high that if there is, I will.) In fact, we can’t predict whether a coin we drop from a nearby leaning tower will be swallowed on its way down by a drunk pigeon. Everything is predictable in principle, except most everything in practice.
We came up with a brilliant way to get over this gap between the theoretically complete predictability of the universe and our inability to apply those laws except at gross levels of detail: Most of us just ignored it. We went ahead thinking of the universe as an exquisite clockwork, and just accepted that most of our experience is beyond our power to usefully predict, especially as the bodies get smaller and the time scales get longer.
We were abetted in this by the crudeness of our devices. We lacked the equipment to handle the masses of factors that might affect an outcome, so we shrugged and moved on.
But now, seven decades into the Age of Computers, storage is incredibly cheap and processing power is awe-inspiring. We have a global network that lets us harvest streams of data from a web of sensors spread out on, under, and over Earth. Most of us now carry in our pockets sensing devices that continuously emit torrents of data about where we are, what we’re doing, and, increasingly, how our hearts are reacting. We sometimes even use those devices to make phone calls.
Big Data processes these networked floods of data, looking for correlations and trends. Some of those correlations are obvious and predicted by Newtonian laws: Tidal data correlates closely with the Earth’s distance from the moon. Some are borne out by practice although we don’t understand why, and often we don’t care. For example, marketers routinely engage in A/B testing that can discover whether there is even a single-digit percentage increase in purchases if an ad is placed on the right or left side of a Web page or if the order of words is changed in a teaser on the outside of an envelope. Some correlations are tight but just have to be coincidental: Since 1992, the yearly number of people who fall into pools and drown correlates with the number of films Nicolas Cage appeared in that year. (The “Spurious Correlations” site collects these gems.)
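Correlation itself is just arithmetic, which is why it can be “tight” without meaning anything. A minimal sketch in Python of the Pearson coefficient that such comparisons rely on, using invented numbers (not the actual Cage filmography or drowning figures):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two made-up yearly series with no causal connection whatsoever.
films = [2, 2, 3, 1, 1, 2, 3, 4]
drownings = [100, 99, 108, 93, 94, 101, 107, 113]

r = pearson(films, drownings)  # comes out above 0.9 for these invented series
```

Nothing in the computation knows or cares whether the two series have anything to do with each other; a high `r` is a pattern, not an explanation.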
Perhaps most important, just as the invention of the microscope enabled us to study new realms, Big Data makes it feasible to study everything from banking to brains to genes to the spread of infectious diseases—and infectious ideas—in ways unimaginable before computers became so capacious.
Even human behavior looks far more predictable now that we can gather enough data about enough of us. One international research paper describes how to combine demographic information with anonymized location data from mobile phones to predict a city’s next high-crime hot spot. Researchers from Harvard University and the University of California–San Diego found that obesity spreads through real-world social networks the way contagious diseases do.
The discovery of such patterns wears away at the traditional idea that we are a wild and crazy species besotted with free will, and thus exempt from the iron determinism of the clockwork universe.
And yet, as with many other correlations discovered by Big Data, the data can show us a behavioral correlation without telling us why there is one. For example, the researchers behind the obesity-as-a-virus study have some reasonable hypotheses to explain why fatness seems to spread even to people one and two links removed from the “carrier”—maybe having fat friends acculturates you to fatness—but these are not explanations that, like Newton’s, rise to the level of laws of the universe.
Another example: The Internet has enabled us to develop “prediction markets,” where strangers can place bets on outcomes. The odds that develop can accurately predict who will get a political nomination or the gross of a movie like Back to the Future, all without tendering a theory of why things turn out that way.
This phenomenon of knowing without understanding shows up in other computer-enabled areas. Complexity science has recognized this for decades. And the relatively new field of systems biology has discovered that when a chemical traverses a cell wall, it can unleash a cascade of reactions so complex that the human brain simply can’t comprehend them all. It takes a computer. That means that we can predict what will happen by running a computer, even if our brains can’t understand what’s going on. And if the computer is running a neural network, we may not be able to work backward through the program’s steps to understand how it came up with its conclusions.
This is the inverse of the Newtonian model of prediction. It also is perhaps leading us to a change in how we think the future works, from a clockwork to a big bang of confetti: an explosion governed by laws but with so many pieces so complexly interrelated that predicting, say, the effect of a cue ball requires actively ignoring the minute electrical charge of each fiber of the pool table’s nap, not to mention the gravitational pull of Jupiter and Saturn.
* * *
We are stepping into a future that is new not just in what it contains but in our picture of how it works. The future seems less like the product of a clockwork’s relentless ticking than the result of uncountable tiny pieces, each simultaneously affecting every other in ways that cannot be fully understood afterward, much less predicted beforehand. Plus, some of those small pieces are on the Internet actively inventing new futures together.
For example, the fact that our computers are now networked means we have entered an era of continuous prediction, in which feedback loops enable Big Data and simulations to constantly refine their results, so that as the night of the eclipse approaches we can more and more accurately predict the local cloud cover.
The old laws of nature gave us a picture of a relatively stable future, where regularity is the norm and predictability is expected. But now we can see that the vastness of data we used to have to ignore is not just noise or the source of rounding errors. We can see the power and beauty of humans iterating on one another’s work. We have evidence in front of our eyes every day of just how contingent the future is on small changes and tentative ideas. The truth of the future is now visibly beyond our ken.
It is now more evident than ever that many of the most interesting questions about the future can only be known by living through it.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.