Just as research on blindness may lead to night-vision gene therapy or astonishing eye implants, so research on deafness may be the steppingstone to supernormal hearing.
For those who have forgotten their anatomy, here’s a quick refresher on how the ear works. (If you haven’t forgotten, skip this paragraph.) Sound arrives at the pinna. This is the visible part of the ear—the dried apricot on the side of the head. Sound waves travel the 1-inch length of the ear canal and stimulate the tympanic membrane (“eardrum”). The vibrations of the eardrum are passed on to the three bones of the middle ear, which amplify the sound and send it into the inner ear, a snail-shaped tube—the “cochlea”—filled with liquid. In the cochlea, the sound becomes a fluid wave, stimulating 7,000 “hair cells” that line the cochlear walls. The hair cells transform the wave into electrochemical signals. These signals fire the nerves that travel to the brain. (Which frequencies a hair cell picks up depends on where it sits in the cochlea.) The hair cells are the ear’s star players, the organ’s most delicate, precise, and important tools. The failure or destruction of hair cells is the leading cause of deafness.
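That frequency-to-place mapping—the cochlea’s “tonotopic” layout—is regular enough to be captured in a formula. A minimal sketch using Greenwood’s frequency-position function for the human cochlea (the constants are Greenwood’s published fits, not anything from this article):

```python
# Greenwood's frequency-position function for the human cochlea:
#     F = A * (10 ** (a * x) - k)
# where x is the fractional distance from the cochlear apex (0.0)
# to the base (1.0). Human constants (Greenwood, 1990):
# A = 165.4, a = 2.1, k = 0.88.

def greenwood_hz(x: float) -> float:
    """Characteristic frequency (Hz) of hair cells at fractional position x."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.5, 1.0):
    print(f"position {x:.1f}: ~{greenwood_hz(x):8.0f} Hz")
```

The apex of the spiral responds to the lowest frequencies (around 20 Hz) and the base to the highest (around 20 kHz), which is why where a hair cell sits determines what it hears.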
The Background
Implants hold the most promise for enhancing hearing. The best implants today are relatively rudimentary. “Cochlear implants” are surgically fitted into the cochlea of deaf people—usually children—whose hair cells don’t work. The implants, which essentially replace the hair cells, receive an audio feed from a microphone outside the ear. A signal processor translates this feed into electrical pulses that fire the nerves attached to the cochlea. The brain interprets the nerve transmissions as sound. Today’s best implants can divide the signal into 21 “channels.” By contrast, each of the 7,000 hair cells in a functioning ear is, effectively, its own channel, so cochlear implants deliver only a fraction of the aural information that the ear normally receives. With years of training, implantees learn to understand speech, but the House Ear Institute’s Bob Shannon, the world authority on implants, says it would take about 100 channels to make that speech sound normal. Miniaturization and better technology should make that possible eventually.
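To get a feel for what 21 channels means against 7,000, here is a hypothetical sketch that splits a speech band into logarithmically spaced channels, roughly the way an implant’s signal processor divides its microphone feed. The band limits and spacing are assumptions for illustration, not the specs of any real device:

```python
# Hypothetical illustration of "channels": divide a 200 Hz - 8 kHz
# speech band into logarithmically spaced analysis bands. More
# channels means narrower bands, i.e., finer frequency resolution.
# (Band limits and spacing here are assumptions, not real implant specs.)

def band_edges(n_channels: int, lo: float = 200.0, hi: float = 8000.0):
    """Return n_channels + 1 log-spaced frequency edges from lo to hi, in Hz."""
    return [lo * (hi / lo) ** (i / n_channels) for i in range(n_channels + 1)]

for n in (21, 100, 7000):
    edges = band_edges(n)
    # Width of the band nearest 1 kHz, as a rough resolution measure.
    i = next(j for j, e in enumerate(edges) if e > 1000.0) - 1
    print(f"{n:5d} channels: band near 1 kHz is ~{edges[i + 1] - edges[i]:6.1f} Hz wide")
```

At 21 channels the band around 1 kHz is nearly 200 Hz wide; at 7,000 it shrinks to a fraction of a hertz—some sense of how much detail the implant throws away.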
What if we used implant technology on undamaged ears? People with normal hearing could wear implants—or, in a much less intrusive procedure, removable amplifiers in the middle ear—that would receive signals from microphones outside the ear.
There’s no limit to what microphones could feed into the ear. Wearing a directional microphone would enable you to eavesdrop on conversations across a room or behind you. There are also microphones that enhance the “cocktail party effect”—the phenomenon that allows you to tune out loud chatter in order to hear the person talking to you. Such a mike would amplify a conversation right next to you but wash out all the other ambient noise. Using a combination of mikes would let you eavesdrop at a distance and then zero in on up-close chatter with the flick of a switch.
At the distant end of this road lies the development of human sonar. Dolphins and bats are echolocators: They emit ultra-high-frequency sounds and use the echoes to determine the location of objects. Theoretically, speculates University of Wisconsin psychology professor Fred Wightman, we could make echolocating implants for ourselves. We would wear a machine that emitted ultra-high-frequency pings, then strap on microphones programmed to hear the ultra-high-frequency echoes. Those signals would be delivered to the implant, translated into sound, and fed to the brain. With enough training—you’d probably start from infancy—children might be able to make sense of the signals: They could have their own form of sonar, useful for night travel.
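The arithmetic behind any such sonar is simple: the delay between a ping and its echo gives the distance to whatever reflected it. A minimal sketch, assuming sound travels at roughly 343 m/s in air:

```python
# Back-of-the-envelope echolocation: distance is the speed of sound
# times half the ping-to-echo delay (half, because the sound makes a
# round trip out to the object and back).

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def echo_distance_m(delay_s: float) -> float:
    """Distance to the reflecting object, given the round-trip echo delay."""
    return SPEED_OF_SOUND * delay_s / 2.0

# A wall about 5 meters away returns an echo in roughly 29 ms:
print(f"{echo_distance_m(0.0292):.1f} m")
```

The hard part isn’t the math—it’s training a brain to hear those delays as a spatial map, which bats and dolphins do natively.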
It takes young children years to understand speech from a cochlear implant. Making sense of something as baroque as echolocation could be impossible.
The benefits of echolocation are so obscure that I can’t imagine anyone would want it. As for less exotic implants: The operation to install today’s cochlear implants destroys all residual hearing. Few people with normal hearing would choose such alarming surgery for such a marginal benefit. Fortunately, there’s a less intrusive, temporary way to perform the same tricks: Do what spies already do, and wear a removable earpiece fed by a directional mike.
Cochlear implants improve every year, as do microphones. There are already hearing aids that allow wearers to choose long-distance or short-distance listening. In a decade, there will be implants that allow different kinds of directional listening. As for sonar or something like it: It will be decades, assuming anyone is interested.
The Perfect Pinna
The Background
The human ear is bad at only one task that matters: We have a hard time telling the difference between sounds that are behind us and sounds that are in front of us. A noise from the front may sound like it’s coming from the rear, and vice versa.
It turns out that our ability to locate these kinds of sounds is largely determined by the shape of the pinna.
According to Wisconsin’s Wightman, the more ridges and folds in the pinna, especially in the upper part, the better we locate sound behind us. Sound waves from the rear bounce around the pinna; the brain uses those bounces to triangulate position. The more convoluted the ear, the more bounces; the more bounces, the better the brain processes the noise. (Wightman describes a visit to his lab by a famous acoustician. The acoustician pointed at a patient’s simple pinna and predicted she would have lousy directional hearing. She did.)
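A simple spherical-head model shows why those pinna bounces matter: the timing difference between the two ears is front/back symmetric, so without the pinna’s spectral fingerprint the brain can’t tell the two apart. A sketch—the head width and the formula are textbook simplifications, not Wightman’s data:

```python
import math

# Why the brain needs the pinna for front/back: in a simple head model,
# the interaural time difference (ITD) depends on sin(azimuth), and
# sin(theta) == sin(180 - theta). So a sound 30 degrees in front of you
# produces the same ITD as one 30 degrees behind you (the "cone of
# confusion"). Head width here is an assumed round number.

HEAD_WIDTH = 0.18       # meters between the ears (assumption)
SPEED_OF_SOUND = 343.0  # m/s

def itd_s(azimuth_deg: float) -> float:
    """ITD in seconds for a distant source (0 degrees = straight ahead)."""
    return (HEAD_WIDTH / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))

front, back = itd_s(30.0), itd_s(150.0)  # 150 degrees = mirrored behind
print(f"front: {front * 1e6:.0f} us, back: {back * 1e6:.0f} us")  # identical
```

The interaural cue is ambiguous, so the only way to break the tie is the frequency coloring added by the pinna’s ridges—more ridges, more information.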
There may be a perfect pinna, a French-braided ear, and with more investigation, researchers might be able to figure out exactly what it looks like. Wightman believes doctors might someday be able to surgically modify pinnas or construct pinna prosthetics that would vastly improve directional hearing. No one has yet tried to make perfect pinnas, but prosthetics have been used successfully on people with damaged pinnas, and it would probably not be very difficult to design an ideal one. Once it was attached, you would need a few weeks to adjust, but when you did, you would have ears in the back of your head, so to speak. Soon, Wightman says, only half-joking, “We could have centers for cosmetic pinnology.”
There is no great technical obstacle to such adjustment, but is it worth the effort? Very few of us regularly need to hear what is going on behind us. Unless you’re a teacher who must know who is throwing spitballs, you’re unlikely to crave a twistier pinna. Few people would subject themselves to the hassle of surgery for such a meager benefit.
Soon, if anyone wants it.
Fast Hearing
The military is fascinated by the prospect that soldiers could learn to process more information more quickly. Can GIs, in essence, learn to hear faster? Information about this is hard to come by—none of the military scientists I phoned returned my calls—but according to civilian researchers I interviewed, the Air Force is studying it.
According to one civilian scientist who has spoken to military colleagues, the Air Force has experimented on pilots and air traffic controllers to determine whether they could work faster if they heard differently. In one experiment, as it was explained to me, the Air Force feeds information about right-side events into the right ear and information about left-side events into the left ear. That is, an air traffic controller might receive updates about planes to the east in his right headphone and planes to the west in his left headphone. The broadcast might even be precise enough that southern planes would sound like they were behind him. So far, the civilian scientist claims, the military trials have been very successful: The “spatial attribute” lets participants process information more accurately and quickly. In essence, they hear faster.
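The left-ear/right-ear trick described above can be sketched as simple stereo panning by bearing. The gains below are illustrative constant-power panning, not whatever the Air Force actually uses—real spatial audio relies on head-related transfer functions:

```python
import math

# Sketch of the spatialized-alert idea: pan each update into the
# listener's headphones according to the aircraft's bearing, using
# constant-power panning so perceived loudness stays steady.
# (Illustrative gains only; not any real military system.)

def pan_gains(bearing_deg: float) -> tuple[float, float]:
    """Left/right headphone gains: -90 = due west (left), +90 = due east (right)."""
    pan = (math.sin(math.radians(bearing_deg)) + 1.0) / 2.0  # 0 = hard left, 1 = hard right
    angle = pan * math.pi / 2.0
    return math.cos(angle), math.sin(angle)  # constant power: L^2 + R^2 == 1

for bearing in (-90, 0, 90):
    left, right = pan_gains(bearing)
    print(f"bearing {bearing:+4d}: L={left:.2f} R={right:.2f}")
```

A plane due west comes out of the left earphone only, a plane dead ahead sits centered, and a plane due east comes out of the right—which is the “spatial attribute” doing the controller’s sorting for him.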
This enhancement has a few specialized uses. Spatial information is the only application I could think of, and even that’s a narrow market. Just as most people don’t need special pinnas, most of us don’t need to hear vast amounts of space-sensitive information in a very short time. It would be reassuring if air traffic controllers could, though.
Research is current. The military may well be using the technology already.
The reason these hearing enhancements seem so marginal is that the ear already does its job astonishingly well—so well that it’s hard to imagine how to improve it. The normal human ear catches a gigantic range of frequencies. The frequencies that we do miss—at the very high and very low ends of the spectrum—don’t contain much of interest anyway (unless you understand Whale). Our ears are already so acute that increasing their sensitivity may be impractical. If we picked up sound any better, we might be distracted by the sound of fluid moving within our inner ear.
So we may be stuck with the ear we’ve got: with its dangling fleshy flaps, its dirt-collecting whorls and twists and sticky neon wax, it may look like a mistake. No one ever called the ear the window to the soul. But we’re not going to do much better.