Caltsio. Filisma. Snunkoople. Though they’re all made-up words, you probably found one funnier than the others. But why? A group of linguists from Canada and Germany explore that question in a new study with a dense title that belies its enswarmment with such undignified coinages as himumma, suppopp, and pachang. “Telling the World’s Least Funny Jokes: On the Quantification of Humor as Entropy” seems unpromising from a comedy standpoint, containing in its name as it does two of the English language’s unfunniest terms, “quantification” and “entropy.” But what the study loses in dissecting the delicate butterfly of humor it gains back in gremsev, sanybon, and insights into what we expect words to look like.
The study’s lead author, Chris Westbury, wanted to develop a mathematical model to predict a word’s ludic potential. Aside from context, is there a reason why Dr. Seuss’ nonsense terms (wumbus, yuzz-a-ma-tuzz) are funny and George Orwell’s (Ingsoc, yp) aren’t? Westbury and his colleagues presented more than 900 people with a total of 6,000 nonwords (“NWs”) to learn which ones made readers laugh. Using that data, they constructed a set of formulae to predict the funniness of a given string of letters.
The model built, the researchers showed 56 fluent English speakers pairs of nonsense words and asked them to rank which one tickled them more. What’s funnier, rousent or throvic? Mempise or chanywa? Bomysta or dockles? (The researchers made sure to discard from the experiment any coinages that accidentally recalled smirky slang terms: dongl, focky, clunt.) NWs with improbable letters—z’s and k’s—as well as relatively uncommon doublings—oo, rr—inspired more mirth, just as the computer predicted.
According to Westbury, his findings dovetail neatly with a theory of humor first outlined by Schopenhauer in 1818. In The World as Will and Representation (another knee-slapper), the German philosopher fingered incongruity as the wellspring of comedy. We laugh when our expectations are violated—and the more specific the thwarted expectation, the better. (For instance, When the clock is hungry, it goes back four seconds provokes grins for the way it tweaks our narrow interpretation of “four” and “seconds.” When the clock is hungry, it rides a horse is equally incongruous, but less specific in its violation, and therefore less funny.)
Schopenhauer’s insight suggests that hilarity-inducing words transgress against our very notion of what a word is. That’s why a weird-sounding lexical test-tube baby like pranomp elicits more giggling than edisted (which evokes, more than anything, what happens when writing is not carefully edisted). In information theory, the unpredictability of a signal is quantified as entropy: High-entropy terms cluster together unlikely groups of letters; low-entropy terms convene the usual suspects. Hence the title of the paper: On the Quantification of Humor as Entropy.
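Westbury’s actual model is more elaborate than this, but the core intuition can be sketched in a few lines of Python: score a string by the average surprisal (negative log probability) of its letters, so that strings built from rare letters come out as more “unexpected.” The letter frequencies below are rough, assumed values for illustration, not the probabilities estimated in the study.

```python
import math

# Approximate English letter frequencies (illustrative values only;
# the study estimated probabilities from its own corpus).
LETTER_FREQ = {
    'a': 0.082, 'b': 0.015, 'c': 0.028, 'd': 0.043, 'e': 0.127,
    'f': 0.022, 'g': 0.020, 'h': 0.061, 'i': 0.070, 'j': 0.002,
    'k': 0.008, 'l': 0.040, 'm': 0.024, 'n': 0.067, 'o': 0.075,
    'p': 0.019, 'q': 0.001, 'r': 0.060, 's': 0.063, 't': 0.091,
    'u': 0.028, 'v': 0.010, 'w': 0.024, 'x': 0.002, 'y': 0.020,
    'z': 0.001,
}

def mean_surprisal(word: str) -> float:
    """Average surprisal, -log2(p), of the letters in `word`.

    Strings stacked with improbable letters (z, k, q) score higher,
    matching the intuition that unlikely letter combinations feel
    more surprising -- and, per the study, funnier.
    """
    letters = [ch for ch in word.lower() if ch in LETTER_FREQ]
    return sum(-math.log2(LETTER_FREQ[ch]) for ch in letters) / len(letters)

# A k-and-double-o coinage outscores a string of everyday letters:
print(mean_surprisal("snunkoople"))
print(mean_surprisal("edisted"))
```

On this toy measure, snunkoople beats edisted, in line with the readers’ rankings of z-laden, double-lettered nonwords above.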
The researchers suspect that evolution had a hand in appointing humor a reflexive response to surprise: Laughter is pleasurable, they point out; it “bribes the brain” into leaving “the beaten cognitive track.” Now Westbury and colleagues are investigating the role of probability in funny phrase construction, or why “existential llama” cracks more smiles than “angry llama.” If you can’t confidently predict the results, you’ll be sure to find them rembrob.