When did the information age begin? One might point to the winter of 1943, when British engineers started using a room-sized machine dubbed “Colossus,” the world’s first electronic digital programmable computer, to break Nazi codes during World War II. Or perhaps it was February 1946, when the U.S. Army unveiled the faster, more flexible Electronic Numerical Integrator and Computer (aka ENIAC) at the University of Pennsylvania. History buffs may push it back further, perhaps bringing up key 19th-century figures like Charles Babbage and Ada Lovelace, who pioneered programmable calculating machines in Victorian England.
But we should look back even earlier, to the work of a towering but often overlooked intellect—to Gottfried Wilhelm Leibniz, the German philosopher and polymath who died 300 years ago on Nov. 14, 1716. Though you may not have heard of him, he was a man who envisioned the systems and machines that would define the digital revolution.
Something of a prodigy, Leibniz was just 8 when he started reading the books in his father’s library. (His father was a professor of moral philosophy at Leipzig University.) He quickly learned the classics, once boasting that he could recite Virgil’s Aeneid by heart. At school, he excelled in logic; by 17 he had defended his master’s thesis, and three years later he had qualified for his doctorate. Leibniz would go on to work as a historian, librarian, legal adviser, and diplomat. He wrote on biology, medicine, geology, theology, psychology, linguistics, and of course philosophy. The king of Prussia, Frederick the Great, described Leibniz as “a whole academy in himself.”
Famously, Leibniz clashed with Isaac Newton over the invention of calculus. Historians now believe that the two men discovered calculus independently, though it’s Leibniz’s elegant and compact notation system, not Newton’s clunkier version, that we use today.
Of course, there was no such thing as “computer science” in Leibniz’s day. But by developing the binary number system, a way of representing numerical information using only 0s and 1s, he became the father of all computer coding. (Computers don’t have to run by manipulating 0s and 1s—but it’s a lot easier if they do.) Leibniz believed that machines, not people, should crunch numbers, and he worked on a prototype for a device that could add, subtract, multiply, and divide. He tweaked and improved the design over many years; one of these contraptions looked like a primitive pinball game, with numbers represented by tiny spheres rolling along grooves and passing through gates that opened and closed. In London, his fellow scientists were so impressed with the device that they elected him to the Royal Society. He designed another machine that could do certain kinds of algebra, and yet another for cracking codes and ciphers.
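The binary system Leibniz described in his 1703 paper on binary arithmetic survives essentially unchanged in every modern computer. As a rough modern illustration (the function names here are my own, not anything from Leibniz), the repeated halving that converts an ordinary number into 0s and 1s takes only a few lines of Python:

```python
def to_binary(n):
    """Represent a non-negative integer as a list of 0s and 1s,
    most significant digit first."""
    if n == 0:
        return [0]
    bits = []
    while n > 0:
        bits.append(n % 2)   # the remainder is the lowest binary digit
        n //= 2              # halve, and repeat on what remains
    return bits[::-1]

def from_binary(bits):
    """Rebuild the integer: each digit doubles the running total."""
    value = 0
    for b in bits:
        value = value * 2 + b
    return value

print(to_binary(1716))                 # the year of Leibniz's death
print(from_binary(to_binary(1716)))    # round-trips back to 1716
```

The same two loops, in one direction or the other, sit beneath every act of digital encoding, whether the thing being encoded is a number, a photograph, or a symphony.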
Leibniz envisioned that these machines would be used in accounting, administration, surveying, astronomy, the production of mathematical tables, and more. Tedious work that had kept human beings awake far into the night, working by candlelight, could now be mechanized. Unfortunately, the technology of the day didn’t allow for the precisely machined parts, such as uniform screws, that Leibniz’s devices required. (For instance, as historians later discovered, something as simple as “carrying the one” turned out to be maddeningly difficult to implement in hardware.) Despite 45 years of work and many prototypes, his calculating machine was never fully functional.
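Why is “carrying the one” so hard mechanically? The difficulty is easy to state in software: a single carry can ripple through every higher digit, so each position’s gears must be able to nudge the next, and the next, all the way up. A minimal sketch of that ripple (illustrative code, not a model of Leibniz’s actual mechanism; digits are listed least significant first):

```python
def ripple_add(a_bits, b_bits):
    """Add two binary numbers (lists of 0s/1s, least significant digit
    first), propagating a carry from each position to the next -- the
    step a mechanical calculator must perform with gears."""
    carry = 0
    out = []
    for i in range(max(len(a_bits), len(b_bits))):
        a = a_bits[i] if i < len(a_bits) else 0
        b = b_bits[i] if i < len(b_bits) else 0
        total = a + b + carry
        out.append(total % 2)    # digit that stays in this position
        carry = total // 2       # overflow passed to the next position
    if carry:
        out.append(carry)        # a final carry adds a whole new digit
    return out

# 7 (binary 111) plus 1: one carry ripples through every position.
print(ripple_add([1, 1, 1], [1]))   # -> [0, 0, 0, 1], i.e. 8
```

In a worst case like this one, every digit changes because of a single increment, which is exactly the cascade that 17th-century metalworking struggled to make reliable.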
But for Leibniz, computation was just the beginning: He believed that all kinds of problems could be reduced to the manipulation of symbols and tackled just as though they were mathematical problems. He imagined a kind of alphabet of human thought, whose symbols could be manipulated according to precise, mechanical rules, the work carried out by devices. He called them “reasoning machines” and envisioned the pursuit we know today as artificial intelligence.
Once the system was perfected, he believed, humanity would have “a new kind of tool, a tool that will increase the power of the mind much more than optical lenses helped our eyes, a tool that will be as far superior to microscopes or telescopes as reason is to vision.” We would weigh arguments, he said, “just as if we had a special kind of balance.” Linguistic barriers between nations would fall, and the new universal language would usher in an era of understanding, peace, and prosperity. (Leibniz was, needless to say, an optimist—he also had ambitions to reunify the Catholic and Protestant churches.)
Leibniz, however, was right in foreseeing the extent to which we would come to see our world in terms of numbers. (Even just a century ago, who would have imagined that creating and manipulating visual images, or recording a symphony, would boil down to processing certain arrangements of 0s and 1s?) He even worried about “data overload,” foreseeing that individuals and governments would struggle to process, store, and retrieve the vast amounts of information that would soon be generated.
Leibniz never became a household name, and many of his ideas, like the notion of a universal symbolic language, never bore fruit. Much of his work had to be rediscovered by later thinkers, such as the 19th-century English mathematician and philosopher George Boole, who more fully developed the idea of a logical system based on binary arithmetic. (You may have run across his name before: Boolean algebra, Boolean searches, Boolean logic.) But in imagining a world in which machines could supplement or supplant human computation, Leibniz paved the way for the information age that blossomed 250 years after his death.
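Boole’s step, from Leibniz’s binary numbers to binary logic, can be shown in miniature: let the truth values be 0 and 1, and the logical connectives become ordinary arithmetic. (A modern illustrative sketch, not Boole’s own notation.)

```python
# Boole's insight in miniature: logic as arithmetic over {0, 1}.
def NOT(x):
    return 1 - x

def AND(x, y):
    return x * y

def OR(x, y):
    return x + y - x * y   # keeps 1 OR 1 from spilling out of {0, 1}

# Laws of logic become checkable identities, e.g. De Morgan's law:
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
print("De Morgan's law holds in all four cases")
```

This arithmetic treatment of reasoning, carried out mechanically, is precisely the program Leibniz had sketched two centuries earlier.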
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.