The internet may be the work of human hands, but human minds no longer guide us through it. Think of your Spotify Discover Weekly selections, which draw on other users' playlists to propose music you might appreciate. The building blocks are idiosyncratically personal, but machines do the real work, selecting tracks for us according to their own mathematical calculations.
As our guides have turned algorithmic, it has grown all the more difficult to shrug off their peculiarities. I can think of no better demonstration of this fact than the infoboxes that crop up beside many Google searches. Sometimes those results are helpful: Search for a new movie, for example, and you'll get excerpts of reviews, a trailer, box-office data, and more, all conveniently packaged for quick consumption. One could criticize Google for pulling users away from the sites that produce such information, siphoning off their audiences in the process, but it's clear enough why the search giant frames things this way.
Then there’s another kind of infobox, a kind that no one asked for and that none of us need. By way of example, here’s one that came up when my colleague April Glaser searched for the word eating:
I don’t know about you, but I think I understand less about eating after reading the sentence, “Eating is the ingestion of food, typically to provide a heterotrophic organism with energy and to allow for growth.” This dense verbiage is drawn, of course, from Wikipedia, which means someone thought it was important enough to write down. But in its original encyclopedic context, that language is merely a précis that leads us to less familiar topics: the biological mechanics of hunger or the prevalence of disordered behaviors, for example. Isolated in the infobox, however, this short sentence alienates us from a concept so essential to our existence that it should be intuitive. All the odder, then, to find it paired with mostly comical photos—drawn from image search results—of people stuffing human food into their corporeal mouth holes.
It’s also not the only one of its kind. Here, for example, is the infobox that comes up when you search for friendship, in which another clinical definition sits beneath some Facebook-ready inspirational quote memes:
And what if you wanted a capsule description of breathing? I’m not sure why you would, but Google’s got you covered there too:
Who are these infoboxes for, exactly? Probably not you. Certainly not me. Indeed, the only indication that there’s any editorial meddling here is a degree of prudishness that creeps in now and then. A search for sex, for example, will redirect you to a “sexual intercourse” infobox that is tellingly free of images—presumably because the top results wouldn’t always be safe for work:
Even if humans do meddle now and then, it’s unlikely that anyone decided to create these infoboxes in the first place. They’re just an effect of the way Google’s underlying algorithms have been instructed about whether and when to excerpt information, algorithms that are presumably broad enough to generate some unnecessary results to ensure that they won’t miss anything important. Still, I have a half-serious theory about what’s going on here, one that requires a brief detour through the architecture of the web itself.
In recent decades, internet developers have shifted toward a model known as the "semantic web," a system of standards that, as a genuinely helpful infobox tells us, "promote common data formats and exchange protocols." In effect, this means that many websites can easily and autonomously pull data from one another. It's an approach that helps Google's Knowledge Graph, the system that assembles infoboxes, scrape information from Wikipedia, Box Office Mojo, and other websites. That largely hands-off approach has led to controversy in the past, as it did when Google contentiously described Jerusalem as the capital of Israel. When such problems crop up, though, it's hard to fault individual humans, as the work of collection and presentation is effectively managed by robots.
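To make the "common data formats" idea concrete, here is a minimal sketch of how a semantic-web-style exchange works. Many sites embed machine-readable facts in schema.org JSON-LD markup (inside a script tag on the page), and a crawler can read those facts without parsing any human prose. The movie snippet below is hypothetical, invented for illustration; only the schema.org vocabulary terms (`@type`, `name`, `aggregateRating`) are real conventions.

```python
import json

# A hypothetical schema.org JSON-LD snippet of the kind many sites embed
# in a <script type="application/ld+json"> tag. The movie and its numbers
# are made up for illustration.
jsonld_snippet = """
{
  "@context": "https://schema.org",
  "@type": "Movie",
  "name": "Example Film",
  "datePublished": "2018-03-02",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "7.4",
    "ratingCount": "1250"
  }
}
"""

# Because the format is standardized, extracting facts is trivial:
# no scraping of headlines or review prose required.
data = json.loads(jsonld_snippet)
title = data["name"]
rating = float(data["aggregateRating"]["ratingValue"])

print(f"{title}: rated {rating} by {data['aggregateRating']['ratingCount']} users")
```

The point is that the heavy lifting is done once, by whoever defined the shared vocabulary; after that, any number of machines can consume the data with a few lines of code, which is roughly how an infobox can be assembled with no human editor in the loop.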
But if robots are building them, robots may also be their final audience. You and I may not need to know the definitions of eating or friendship, but we’re not always going to be around to look things up or explain the way they are. Someday—perhaps someday soon—we humans may not be around anymore at all.
When that day comes, the disembodied intelligences that will, perhaps, survive us will surely have questions: What was eating? they will wonder. What was friendship? they will ask. And when they do, they will find the answers waiting for them in easily consumable form, prepared by their own digital ancestors and primed by our own fatal drive for ease and convenience. This data is the food they will eat, the air they will breathe. What we will leave behind us is nothing short of our own archaeological record, an anthropology of the body for those who have only minds. How fitting that we played such a small role in creating it.