Humans are useless at assessing probabilities. But against the odds, Dylan Evans has tracked down the handful of people who rate as geniuses on the intelligence scale he calls risk quotient. Alison George asked him what they can teach us—and if we can boost our own scores
Most people probably haven’t heard of risk intelligence. What is it?
It is the ability to estimate probabilities accurately; it's about having the right amount of certainty to make educated guesses. That's the simple definition. But this apparently simple skill turns out to be quite complex. It ends up being a rather deep thing about how to work on the basis of limited information and cope with an uncertain world, about knowing yourself and your limitations.
Are most of us bad at this?
Yes. The psychologists Daniel Kahneman and Amos Tversky laid the groundwork for much of what we know about judgment and decision-making. One of their findings is that we are incredibly bad at estimating probabilities. I assumed this was pretty much universal and hard, if not impossible, to overcome. So I was surprised to come across occasional islands of high risk intelligence in odd places.
Where were those pockets of genius?
I found them among horse-race handicappers, bridge players, weather forecasters, and expert gamblers. You can only be an expert gambler where there is room for skill, like blackjack, poker, or sports betting. It is hard to track them down because they shun publicity, and it was hard to get them to trust me, but eventually they did. I interviewed the blackjack team who inspired the film 21, as well as other blackjack and poker players. What they have in common is they are very disciplined and hardworking.
What’s the difference between an expert gambler and an ordinary gambler?
The expert gambler makes money and the problem gambler loses it. But there are emotional differences too. Although both gamble a lot and it can appear compulsive, expert gamblers know when not to bet; they evaluate each opportunity.
There is also a big asymmetry in how they feel about winning and losing. Problem gamblers get a buzz from winning; it's like an adrenalin rush. But they don't mind losing that much. With experts, it's the opposite: they don't get a huge kick out of winning; the pleasure is more cognitive. But they hate losing so much that they are constantly re-evaluating their decisions and working out how to do better.
Does a talent for blackjack mean you make intelligent choices in the rest of your life?
There is a degree to which the things you learn by developing high risk intelligence in one area spill over into the rest of life. You see a kind of modesty, for example. A distinguishing feature of people with this kind of intelligence is that they have had extensive experience of learning from the mistakes of overconfidence in one area, and they apply that lesson generally.
Knowing your limits is key then?
Yes. It doesn't matter if you've got a high level of knowledge about the horses in a race: if you don't have corresponding self-knowledge, it is no good; you won't have high risk intelligence.
What else did you find about expert gamblers?
They're not Rain Man geniuses; they don't necessarily have mathematics degrees, and there is no correlation with education or IQ. But they are all comfortable with numbers, and their risk intelligence is substantially higher than average.
How do you quantify risk intelligence?
I set up an online test to measure risk quotient, or RQ. It consists of 50 statements, some true, some false, and you have to estimate the likelihood of each statement being true. The average RQ is not high. There are two ways to have a low RQ: one is by being overconfident, the other by being underconfident. You do find people making the underconfidence mistake, but there are far fewer of them.
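Evans doesn't spell out the scoring formula here, but a test like this is typically scored on calibration: how closely your stated confidence tracks how often you are actually right. The sketch below is an illustrative assumption in that spirit, not his actual RQ formula.

```python
# Hypothetical calibration-based scoring, in the spirit of an RQ test.
# The exact formula used for RQ is not given in the interview, so this
# is an illustration of the idea, not Evans's method.

def calibration_score(predictions):
    """predictions: list of (stated_probability, was_true) pairs.

    Groups answers into 10% confidence bins and compares the stated
    probability with the observed frequency of true statements.
    Returns a 0-100 score: 100 means perfectly calibrated.
    """
    bins = {}  # bin midpoint -> (hits, total)
    for p, was_true in predictions:
        mid = round(p, 1)  # bin to the nearest 10%
        hits, total = bins.get(mid, (0, 0))
        bins[mid] = (hits + was_true, total + 1)

    # Mean absolute gap between stated confidence and actual frequency,
    # weighted by how many answers fell in each bin.
    error = sum(abs(mid - hits / total) * total
                for mid, (hits, total) in bins.items())
    error /= sum(total for _, total in bins.values())
    return round(100 * (1 - error))

# Well calibrated: statements rated 90% likely are true 9 times out of 10.
calibrated = [(0.9, True)] * 9 + [(0.9, False)]
# Overconfident: everything rated 90%, but right only half the time.
overconfident = [(0.9, True)] * 5 + [(0.9, False)] * 5

print(calibration_score(calibrated))     # 100
print(calibration_score(overconfident))  # 60
```

The overconfident answerer is heavily penalized even though they got half the statements right, which matches the point above: the score measures self-knowledge, not raw knowledge.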
Your book presents a rather worrying finding—that doctors have a very low risk intelligence?
Absolutely. In fact, as they get older they become more confident but no more accurate, which means their risk intelligence actually declines. One study I looked at showed that when doctors estimated patients had a 90 percent chance of having pneumonia, only about 15 percent had the condition, which is a huge degree of overconfidence. Another way of putting it is that they think they know more than they do. One explanation is that doctors have to make so many different decisions about so many different things that they don't get a chance to build up a good model. Maybe if you have to make life-and-death decisions, you feel you have to exude confidence; otherwise you'd be too damned scared to do anything.
Is the appetite for risk a very different thing from being intelligent about risks?
Yes. They are often confused. Appetite for risk is an emotional thing, while risk intelligence is a cognitive skill. You could have people with both a high appetite for risk and high risk intelligence, or people with low levels of both. A particularly dangerous combination is a high risk appetite paired with low risk intelligence.
What mistakes do we make in assessing risks?
The need for closure is a really interesting one. If you have a great need for closure, it means you don't like being in a state of uncertainty: you want an answer, any answer, even if it is the wrong one. At the other extreme, there is the need to avoid closure, where you are constantly seeking more information, so you get stuck in analysis paralysis.
Can we increase our risk quotient?
Absolutely. One way is by becoming aware of the different cognitive biases. Another is to play a personal prediction game. Bet against yourself and estimate the probability of anything: whether your partner will get home before six, or whether it is going to rain, and keep track of your estimates. Expert gamblers are constantly on the lookout for overconfidence, biases, and so on. It is hard work, but it means they know themselves pretty well and they don't have illusions. They know their weaknesses.
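The prediction game he describes can be kept as a simple diary. A minimal sketch follows; the diary format is made up for illustration, and the Brier score used here is a standard measure of probability-forecast accuracy, not something Evans specifies.

```python
# Minimal sketch of the personal prediction game: log forecasts as
# probabilities, record what actually happened, then score yourself
# with the Brier score (mean squared error between your probability
# and the 0/1 outcome; lower is better, 0 is perfect).

predictions = []  # each entry: [description, probability, outcome]

def forecast(description, probability):
    """Log a prediction with your estimated probability (0 to 1)."""
    predictions.append([description, probability, None])

def resolve(description, outcome):
    """Record the outcome later: 1 if it happened, 0 if not."""
    for entry in predictions:
        if entry[0] == description:
            entry[2] = outcome

def brier_score():
    """Average squared gap between estimates and resolved outcomes."""
    resolved = [(p, o) for _, p, o in predictions if o is not None]
    return sum((p - o) ** 2 for p, o in resolved) / len(resolved)

forecast("partner home before six", 0.7)
forecast("rain tomorrow", 0.2)
resolve("partner home before six", 1)  # it happened
resolve("rain tomorrow", 0)            # it didn't
print(brier_score())  # ((0.7-1)**2 + (0.2-0)**2) / 2, roughly 0.065
```

Tracking the score over weeks is what surfaces the overconfidence the interview describes: if your 70-percent predictions come true only half the time, the score tells you before your intuition does.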
I did your RQ test, and got a high score. If that’s correct, would that manifest itself in my life?
It probably means you can judge the accuracy of information pretty well. You wouldn’t be totally taken in by a random news story. You would probably be quite good at learning how much to trust what people tell you.
People may have heard of you because of the “fruitbatgate” incident, when you were accused of sexual harassment after showing a colleague a paper about fruit bats engaging in oral sex …
It is complicated, and I don’t want to comment on the specifics of the case, but I will comment on the broader context—a worrying trend in academia towards policies that inhibit free discussion of ideas and sharing of information. The Foundation for Individual Rights in Education in the U.S. has highlighted many cases in which free speech in universities has been curtailed by oppressive policies. Universities should encourage academics and students to take risks and push back the frontier of knowledge, but they are increasingly risk-averse, and this is a terrible shame.
This article originally appeared in New Scientist.