“… and it tells us if there’s an increase or decrease of oxygenation in that area of the brain,” Dr. Hasan Ayaz was saying, holding up what looked like a neoprene scuba hood covered in plastic nodes and protruding wires. It reminded me of Doc Brown’s mind-reading helmet in Back to the Future. I tried to picture Ayaz with bug eyes and his hair grown out to mad-scientist wildness, but he’s a real-life scientist in Drexel University’s biomedical engineering department, with a firm handshake, a neatly trimmed dark beard, and designer eyeglasses. And, as my mind wandered to movie scientists, he’d been talking in measured tones about how the space-age scuba hood in his hands—as well as an equally sci-fi node-covered headband connected to another monitor nearby—can show him whether its wearer is engaged or disengaged in what he or she is doing; if the wearer is really listening to a lecture, say, or if her thoughts have wandered off …
With a guilty start, I realized that while listening to a professor explain new technology designed to identify and avert student boredom, I had completely tuned out. Maybe I should have been wearing the headgear, called functional near-infrared spectroscopy (fNIRS to its friends). Ayaz proposes fNIRS as a uniquely modern solution to an ancient problem. Student engagement has been a concern almost since teaching began. Aristotle used to walk around with his students while he taught, keeping their restless bodies occupied in a West Wing–style pedagogy known as the Peripatetic school. Active as it was, his style still leaned on lecture and memorization, and this (minus Aristotle’s strolling) would be the preferred method of scholarship until the 17th century, when Czech education reformer John Amos Comenius wrote that learning should be “gamesome” in order to keep students with “flickering wits” from growing bored.
This theory would grow dominant in education and beyond, with “gamification” strategies showing up everywhere from language learning apps like Duolingo to navigational tools like Waze. Comenius himself invented the illustrated textbook, the gold standard of classroom entertainment—until projectors entered the scene in the 20th century. “Books will soon be obsolete in schools,” predicted Thomas Edison in 1913. “Scholars will be instructed through the eye.”
We now live in something like Edison’s future, with technological equipment playing a huge role in the classroom, but the old concern of student engagement remains. If anything, it’s grown, as the same sorts of tools that have revolutionized learning have also provided many more rivals for student attention. The average 18-to-24-year-old, for example, sends 3,853 texts a month, which is about 128 messages each day. Even if a student’s phone is turned off in class, imagine the distraction a student suffers just knowing those messages are there, accumulating silently. Add this to the fact that we’re already inclined to seek distraction in school, with 82 percent of U.S. high school students reporting being sometimes or often bored in class, and it begins to seem like a wonder that anyone learns anything at all.
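(The per-day figure follows from the monthly one: 3,853 texts spread over roughly 30 days does work out to about 128 a day.)

```python
# Sanity check of the texting statistic: 3,853 texts a month,
# divided over an approximate 30-day month.
texts_per_month = 3853
days_per_month = 30  # approximation
print(round(texts_per_month / days_per_month))  # → 128
```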
Ayaz isn’t the only researcher trying to crack the puzzle of student inattention. Take Mark Archer, CEO and co-founder of Artha, who’s developing educational software for young learners called Little Dragon. The Little Dragon, a cartoon, can read and respond to an individual student’s emotional state in real time, using facial recognition technology that analyzes facial expressions through a webcam and then stores them in a profile, developing a growing understanding of each individual. The dragon acts as a kind of host, guiding the user through the lessons, making sure she remains engaged the whole time. “Bored?” asks the oh-so-cute dragon when a child props her chin on her hands and allows the corners of her mouth to dip down. “Let’s make it more of a challenge!”
This adorable dragon only lives on screens. And yet as Archer told me about his product, which is currently under development, he assured me that they “certainly don’t think the answer lies in having kids spend inordinate amounts of time staring into screens.” (Or, presumably, developing a crippling dependence on dragons.)
Archer was addressing a common conundrum of educational technology: that it can end up contributing to the problem it was created to fix—an “endless feedback loop,” as a frustrated professor I know put it. Classroom distraction doesn’t just come from the phones in students’ pockets; it can also result from the very gadgets invented for the classroom, gamified educational tools that often aid and abet short attention spans by catering to the most restless.
And while distraction rarely helps students learn, there’s evidence that boredom does. Some studies suggest that boredom could even be necessary for certain aspects of learning, like creativity and associative thinking. Which means it might not be such a bad thing that, as Wei Xiaoyong, a professor in the computer science department at Sichuan University in Chengdu, pointed out, “Students are human, they have limited energy and cannot always focus.”
Wei’s own contribution to the field of boredom-detection tech involves recording and analyzing student facial expressions. Wei originally designed the project as a fun exercise for his students, so they could practice operating facial recognition technology using their own faces. “But when I tell people about it,” he told me, “teachers are the most interested, because it could tell them when they’re doing a good job and when they can improve.”
And that does seem to be who all this educational technology appeals to most: not students but teachers, who spend their lives looking out onto rooms full of faces, some alert, some yawning, some flat-out asleep. Who might come to dread students like, well, like me, whose mind wandered to Doc Brown’s hair while listening to Dr. Ayaz. “I’m sorry,” I said. “Can you explain what this indicates again?” I pointed at the screen we were looking at, 16 black boxes with squiggling lines running across each of their faces. Each box represented a region of the brain that the headwear monitors, all in the prefrontal cortex, the area right below the skin and bone of the forehead.
“The lines show brain activity, or oxygenation,” he said. “But you can’t actually tell much from it. You have to compare it to data collected from the speaker at the same time. If the listener is engaged, their data will be synchronized.”
Something clicked, and for the first time I felt I was really hearing him. When you’re engaged in listening, Ayaz continued, your brain syncs itself up with the brain of the person you’re listening to, a kind of unconscious mimicry that sounded more sci-fi than the fake sci-fi I’d been imagining. This brain syncing is how we pass information to each other, whether it’s a mathematical concept or a personal story or Ayaz telling me about his headwear.
“Intimate conversation” is usually understood as private, a tête-à-tête between two people about something personal, and probably emotional. But if you look at brain activity the way technology allows Ayaz to, all topics are revealed as intimate, as long as the listener is paying attention. I thought about a student’s brain linking up with a professor’s, how not just the material but also a way of processing it is passed on, and maybe even the professor’s own enthusiasm about the topic. In the lab, we watched the recorded brain activity on the screen, the rise and fall of listener oxygenation levels, with the rapt mutual investment of two sports fans. “When these levels match with the speaker’s levels,” said Ayaz, pointing at the screen, “that’s listening.”