Record label EMI and Apple announced Monday that iTunes will soon offer premium music files, which come without copy protection and have a bitrate of 256 kbps instead of the usual 128 kbps. The luxury tracks will cost 30 cents more than the standard downloads. Will people be able to hear the difference?
Probably not. Studies have found that as long as you’re using high-quality encoding software, music compressed to a bitrate of 128 kbps or more is “transparent”—in other words, most listeners can’t distinguish it from CD quality. (The bitrate measures how much digital information gets transmitted every second. CDs operate at 1,411 kbps, more than 10 times the rate of MP3s.) But a listener’s ability to distinguish sound quality depends on many factors, like age, hearing ability, and attentiveness, not to mention the style of music and where one listens to it. For example, music with delicate timbres—a string quartet, say—might sound noticeably choppy at lower bitrates, whereas a compressed AC/DC song might not suffer much. Similarly, you’re not going to hear the difference in a car, where tonal quality is already murky, but you might through noise-canceling headphones. The sound quality of a file at any bitrate also depends on the compression program, or “codec,” used to create it; some work better than others. All these variations make it difficult to say for certain whether 256 kbps will sound noticeably better than 192, 160, or 128 kbps.
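The CD figure quoted above can be checked with simple arithmetic. A minimal sketch, assuming the standard audio-CD format (44,100 samples per second, 16 bits per sample, two stereo channels—parameters from the CD standard, not from the article’s sources):

```python
# Audio-CD parameters (the "Red Book" standard).
SAMPLE_RATE_HZ = 44_100   # samples per second, per channel
BITS_PER_SAMPLE = 16
CHANNELS = 2              # stereo

# Bitrate = samples/sec x bits/sample x channels, converted to kbps.
cd_bitrate_kbps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE * CHANNELS / 1000
print(cd_bitrate_kbps)        # 1411.2 -- the "1,411 kbps" quoted above
print(cd_bitrate_kbps / 128)  # about 11, i.e. more than 10x a 128 kbps MP3
```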
In any case, doubling the bitrate from 128 kbps to 256 kbps won’t make music sound twice as good, because the smaller file already has the most important information. Codecs like MP3 and iTunes’ AAC chop up music from a CD into little time frames and, for each one, determine which frequencies to keep and which to discard. As a result, about 90 percent of an audio CD’s original data gets thrown away in the MP3 compression process. (This type of compression is called “lossy,” as opposed to “lossless.” You lose data during “lossy” compression, whereas “lossless”—think ZIP files—allows you to reassemble the whole thing later.) Listeners don’t need all the data on a CD, since much of it is imperceptible to the human ear. Sound compression takes advantage of this fact by removing all that extra information. For starters, codecs throw out frequencies outside the range of human hearing—roughly 20 Hz to 20,000 Hz. But that only accounts for a small amount of savings. To save even more space, the codecs also scrap frequencies that would be audible on their own but become virtually imperceptible in the presence of other sounds, like a booming bass. A well-designed codec will only get rid of stuff you wouldn’t notice in the original recording; that’s why the codec you use to compress a file can be more important than its bitrate.
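The rough 90 percent figure, and the lossy-versus-lossless distinction, can both be sketched in a few lines. This is illustrative only: the bitrates are the ones discussed above, and Python’s zlib stands in for ZIP-style lossless compression:

```python
import zlib

# Lossy: an MP3 keeps roughly 128 kbps worth of data out of a CD's
# 1,411 kbps stream, so the rest is discarded for good.
kept = 128 / 1411
print(f"{1 - kept:.0%} of the CD's data is thrown away")  # about 91%

# Lossless (the ZIP-file case): decompression reassembles the
# original bytes exactly, so nothing is lost.
original = b"a very repetitive passage " * 200
packed = zlib.compress(original)
assert zlib.decompress(packed) == original  # byte-for-byte identical
print(len(packed), "bytes, down from", len(original))
```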
But all else being equal, a song’s bitrate provides a reasonable indicator for sound quality. In general, the more bits, the better. That said, each extra bit you add when expanding a compressed file will be less essential than the last. (Going from 64 kbps to 128 kbps adds more important data than going from 128 kbps to 192 kbps.) So, as you compare higher and higher bitrates, sound quality becomes harder to distinguish—the musical equivalent of diminishing returns.
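One thing the bitrate does determine exactly is file size, which grows linearly even as the audible payoff shrinks. A quick sketch, assuming a hypothetical four-minute track (the length is an assumption for illustration):

```python
# File size for a constant-bitrate file:
# (bits per second x seconds) / 8 bits per byte, reported in megabytes.
TRACK_SECONDS = 4 * 60  # hypothetical four-minute song

for kbps in (64, 128, 192, 256):
    megabytes = kbps * 1000 * TRACK_SECONDS / 8 / 1_000_000
    print(f"{kbps:>3} kbps -> {megabytes:.2f} MB")
# Each doubling of the bitrate doubles the file size, but -- as the
# article notes -- adds less and less audible improvement.
```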
Got a question about today’s news? Ask the Explainer.
Explainer thanks Peter Cariani of Harvard Medical School, William Hartmann of Michigan State University, and Adrian Houtsma of the U.S. Army Aeromedical Research Laboratory.