Excerpted from The Extended Mind. Copyright © 2021 by Annie Murphy Paul. Available from HMH Books & Media. Reprinted by permission of HarperCollins Publishers. All rights reserved.
The scene from the 2002 movie Minority Report is famous because, well, it’s just so cool: Chief of Precrime John Anderton, played by Tom Cruise, stands in front of a bank of gigantic computer screens. He is reviewing evidence of a crime yet to be committed, but this is no staid intellectual exercise; the way he interacts with the information splayed before him is active, almost tactile. He reaches out with his hands to grab and move images as if they were physical objects; he turns his head to catch a scene unfolding in his peripheral vision; he takes a step forward to inspect a picture more closely. Cruise, as Anderton, physically navigates through the investigative file as he would through a three-dimensional landscape.
The movie, based on a short story by Philip K. Dick and set in the year 2054, featured technology that was not yet available in the real world—yet John Anderton’s use of the interface comes off as completely plausible, even (to him) unexceptional. David Kirby, a professor of science, technology, and society at California Polytechnic State University, maintains that this is the key to moviegoers’ suspension of disbelief. “The most successful cinematic technologies are taken for granted by the characters” in a film, he writes, “and thus, communicate to the audience that these are not extraordinary but rather everyday technologies.”
The director of Minority Report, Steven Spielberg, had an important factor working in his favor when he staged this scene. The technology employed by his lead character relied on a human capacity that could hardly be more “everyday” or “taken for granted”: the ability to move ourselves through space. For added verisimilitude, Spielberg invited computer scientists from the Massachusetts Institute of Technology to collaborate on the film’s production, encouraging them “to take on that design work as if it were an R&D effort,” says John Underkoffler, one of the researchers from MIT. And in a sense, it was: Following the release of the movie, Underkoffler says, he was approached by “countless” investors and CEOs who wanted to know, “Is that real? Can we pay you to build it if it’s not real?”
Since then, scientists have succeeded at building something quite similar to the technology that Tom Cruise engaged to such dazzling effect. (John Underkoffler is now himself the CEO of Oblong Industries, developer of a Minority Report–like user interface he calls a Spatial Operating Environment.) What’s more, researchers have begun to study the cognitive effects of this technology, and they find that it makes real a promise of science fiction: It helps people to think more intelligently.
The particular tool that has become the subject of empirical investigation is the “large high-resolution display”—an oversized computer screen to which users can bring some of the same navigational capacities they would apply to a real-world landscape. Picture a bank of computer screens three and a half feet wide and nine feet long, presenting to the eye some 31.5 million pixels. (The average computer monitor has fewer than 800,000 pixels.) Robert Ball, an assistant professor of computer science at Weber State University in Utah, has run numerous studies comparing people’s performance when interacting with a display like this to their performance when consulting a conventionally proportioned screen.
The improvements generated by the use of the supersized display are striking. Ball and his collaborators have reported that large high-resolution displays increase by more than tenfold the average speed at which basic visualization tasks are completed. On more challenging tasks, such as pattern finding, study participants improved their performance by 200 to 300 percent when using large displays. Working with the smaller screen, users resorted to less efficient and more simplistic strategies, producing fewer and more limited solutions to the problems posed by experimenters. When using a large display, they engaged in higher-order thinking, arrived at a greater number of discoveries, and achieved broader, more integrative insights. Such gains are not a matter of individual differences or preferences, Ball emphasizes; everyone who engages with the larger display finds that their thinking is enhanced.
Why would this be? Large high-resolution displays allow users to deploy their “physical embodied resources,” says Ball, adding, “With small displays, much of the body’s built-in functionality is wasted.” These corporeal resources are many and rich. They include peripheral vision, or the ability to see objects and movements outside the area of the eye’s direct focus. Research by Ball and others shows that the capacity to access information through our peripheral vision enables us to gather more knowledge and insight at one time, providing us with a richer sense of context. The power to see “out of the corners of our eyes” also allows us to be more efficient at finding the information we need, and helps us to keep more of that information in mind as we think about the challenge before us. Smaller displays, meanwhile, encourage a narrower visual focus, and consequently more limited thinking. As Ball puts it, the availability of more screen pixels permits us to use more of our own “brain pixels” to understand and solve problems.
Our built-in “embodied resources” also include our spatial memory: our robust capacity to remember where things are located in space. This ability is often “wasted,” as Ball would have it, by conventional computer technology: On small displays, information is contained within windows that are, of necessity, stacked on top of one another or moved around on the screen, interfering with our ability to relate to that information in terms of where it is located. By contrast, large displays, or multiple displays, offer enough space to lay out all the data in an arrangement that persists over time, allowing us to leverage our spatial memory as we navigate through that information.
Researchers from the University of Virginia and from Carnegie Mellon University reported that study participants were able to recall 56 percent more information when it was presented to them on multiple monitors rather than on a single screen. The multiple-monitor setup induced the participants to orient their own bodies toward the information they sought—rotating their torsos, turning their heads—thereby generating memory-enhancing mental tags as to the information’s spatial location. Significantly, the researchers noted, these cues were generated “without active effort.” Automatically noting place information is simply something we humans do, enriching our memories without depleting precious mental resources.
Other embodied resources engaged by large displays include proprioception, or our sense of how and where the body is moving at a given moment, and our experience of optical flow, or the continuous stream of information our eyes receive as we move about in real-life environments. Both these busy sources of input fall silent when we sit motionless before our small screens, depriving us of rich dimensions of data that could otherwise be bolstering our recall and deepening our insight.
Indeed, the use of a compact display actively drains our mental capacity. The screen’s small size means that the map we construct of our conceptual terrain has to be held inside our head rather than fully laid out on the screen itself. We must devote some portion of our limited cognitive bandwidth to maintaining that map in mind; what’s more, the mental version of our map may not stay true to the data, becoming inaccurate or distorted over time. Finally, a small screen requires us to engage in virtual navigation through information—scrolling, zooming, clicking—rather than the more intuitive physical navigation our bodies carry out so effortlessly. Robert Ball reports that as display size increases, virtual navigation activity decreases—and so does the time required to carry out a task. Large displays, he has found, require as much as 90 percent less “window management” than small monitors.
Of course, few of us are about to install a 30-square-foot screen in our home or office (although large interactive displays are becoming an ever more common sight in industry, academia, and the corporate world). But Ball notes that much less dramatic changes to the places where we work and learn can allow us to garner the benefits of physically navigating the space of ideas. The key, he says, is to turn away from choosing technology that is itself ever faster and more powerful, toward tools that make better use of our own human capacities—capacities that conventional technology often fails to leverage. Rather than investing in a lightning-quick processor, he suggests, we should spend our money on a larger monitor—or on multiple monitors, to be set up next to one another and used at the same time. The computer user who makes this choice, he writes, “will most likely be more productive because she invested in the human component of her computer system. She has more information displayed at one time on her monitor, which, in turn, enables her to take advantage of the human side of the equation.”
Large-format displays and multimonitor setups are just one way humans can hack the brain’s built-in navigational system. Research has found that all of us seem to use this system to construct mental maps, not just of physical places but of the more abstract landscape of concepts and data—the space of ideas. This repurposing of our sense of physical place to navigate through purely mental structures is reflected in the language we use every day: We say the future lies “up ahead,” while the past is “behind” us; we endeavor to stay “on top of things” and not to get “out of our depth”; we “reach” for a lofty goal or “stoop” low to commit a disreputable act. These are not merely figures of speech but revealing evidence of how we habitually understand and interact with the world around us. Notes Barbara Tversky, a professor of psychology and education at Teachers College in New York: “We are far better and more experienced at spatial thinking than at abstract thinking. Abstract thought can be difficult in and of itself, but fortunately it can often be mapped onto spatial thought in one way or another. That way, spatial thinking can substitute for and scaffold abstract thought.”
Scientists have long known that the hippocampus is centrally involved in our ability to navigate through physical space. More recently, researchers have shown that this region is engaged in organizing our thoughts and memories more generally: It maps abstract spaces as well as concrete ones. In a study published in 2016, neuroscientist Branka Milivojevic, of the Donders Institute for Brain, Cognition and Behaviour in the Netherlands, scanned the brains of a group of volunteers as they watched the 1998 movie Sliding Doors. In this romantic comedy-drama, the main character, Helen—played by actress Gwyneth Paltrow—meets two different fates. In one storyline, she makes it onto a train and returns home in time to find her boyfriend in bed with another woman. In a second, parallel storyline, she misses the train and remains oblivious to her boyfriend’s infidelity. As the study participants watched the film, Milivojevic and her collaborators observed activity in their hippocampi identical to that seen in people mentally tracing a path through a physical space. Milivojevic proposes that the viewers of Sliding Doors were effectively navigating through the events of the movie, finding their way along its branching plotline and constructing a map of the cinematic territory as they went. We process our firsthand experiences in the same manner, she submits.
Such research points toward another effective spatial hack: drawing a concept map that captures the mental map we carry around in our heads. A concept map is a visual representation of facts and ideas, and of the relationships among them. One immediate benefit of drawing a concept map is that we “offload” the information it depicts, moving it out of our heads and onto a whiteboard or a sheet of paper. Keeping a thought in mind—while also doing things to and with that thought—is a cognitively taxing activity. We put part of this mental burden down when we delegate the representation of the information to physical space; in so doing, we gain more mental resources to think about that same material.
In addition to this expanded bandwidth, the act of creating a concept map generates a number of other advantages. It forces us to reflect on what we know, and to organize it into a coherent structure. As we construct the concept map, the process may reveal gaps in our understanding of which we were previously unaware. And, having gone through the process of concept mapping, we remember the material better—because we have thought deeply about its meaning.
And there is yet another benefit produced by turning a mental “map” into a stable external artifact. Once a concept map is completed, the knowledge that usually resides inside the head is made visible. By inspecting the map, we’re better able to see the big picture, and to resist becoming distracted by individual details. We can also more readily perceive how the different parts of a complex whole are related to one another. While representations in the mind and representations on the page may seem roughly equivalent, in fact they differ significantly in terms of what psychologists call their “affordances”—that is, what we’re able to do with them. “When thought overwhelms the mind,” notes Barbara Tversky, “the mind uses the body and the world.”
By Annie Murphy Paul