You probably know at this point in the pandemic that if you test positive on a rapid test (and you’ve used the test correctly—you’ve got to swab pretty hard!), the odds are overwhelming that you have COVID and are infectious. You have, no doubt, been warned that a negative result on a rapid test should be taken with a grain of salt, especially if you’ve been exposed to the virus or you have symptoms. And you might have heard two terms about tests thrown around: sensitivity and specificity, particularly if you’ve been trying to get a grip on exactly how accurate rapid tests are.
As a mathematician, I think knowing a bit more about how sensitivity and specificity are calculated can help you better understand why a positive result should be heeded—and a negative test should be read with some caution.
First we’ll look at specificity. The Centers for Disease Control and Prevention explains that having rapid tests widely available is a public good because
The high specificity and rapid BinaxNOW antigen test turnaround time facilitate earlier isolation of infectious persons. Antigen tests can be an important tool in an overall community testing strategy to reduce transmission.
What does specificity mean here? Specificity is calculated from the number of false positive results a test produces (and though the CDC references one brand in the quote above, the results are similar for all of them). The number of false positives that studies of rapid tests report is essentially zero. Basically, there are never (or very, very rarely) any false positives. So why is a “high specificity” good? Because it tells you that false positives are vanishingly rare, which means a positive result is almost certainly real.
To calculate the specificity of a test, we first need its “false positive rate.” That rate is a fraction: the number of false positives identified goes on top, and the number of people tested who did not have COVID-19, counting both the false positives and the “true negatives,” goes on the bottom: (number of false positives)/(number of false positives + number of true negatives).
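To make that fraction concrete, here is a minimal sketch in Python. The counts are made up purely for illustration; they are not from any real study.

```python
# Hypothetical counts from testing 1,000 people known NOT to have COVID:
false_positives = 2    # tested positive despite not being infected
true_negatives = 998   # correctly tested negative

# False positive rate = FP / (FP + TN)
false_positive_rate = false_positives / (false_positives + true_negatives)
print(false_positive_rate)  # 0.002
```

Note that the denominator counts everyone who truly did not have the disease, whether the test got them right or not.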
The number of false positives for a rapid test is known to be essentially zero. I hope you recall that any fraction with a zero on top is zero, regardless of the size of the denominator. Thus, the false positive rate for a test with no false positives is also zero.
It would certainly be more intuitive if clinical studies and articles just told us the false positive rate. Instead, that rate is used to calculate specificity, a percentage: subtract the false positive rate from 1, then multiply by 100. So a test with a false positive rate of 2 out of 10, or 20 percent, would have a specificity of 80 percent. A test like a rapid COVID test, with a false positive rate of essentially zero, has a specificity of 100 percent. To put that a little more simply, the specificity tells you how often you can trust a positive result. The answer with rapid tests is basically 100 percent of the time!
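Putting the two steps together, here is a small sketch of the specificity calculation. The numbers are the illustrative ones from this article, and `specificity_percent` is just a name chosen for the example.

```python
def specificity_percent(false_positives, true_negatives):
    """Specificity as a percentage: (1 - false positive rate) * 100."""
    fpr = false_positives / (false_positives + true_negatives)
    return round((1 - fpr) * 100, 2)

# A false positive rate of 2 out of 10 gives a specificity of 80 percent:
print(specificity_percent(2, 8))     # 80.0
# A rapid COVID test with essentially zero false positives: 100 percent.
print(specificity_percent(0, 1000))  # 100.0
```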
Let’s turn now to sensitivity, and to negative results on a rapid test. (A useful mnemonic: The n in sensitivity means it’s about negative tests.) Unfortunately, a negative result on a rapid test is much less informative than a positive one. The current rapid tests have a (relatively) high false negative rate; in other words, they are less sensitive tests.
A false negative means that the test says you don’t have a disease when you actually do. Sensitivity comes from knowing the “false negative rate” (sometimes called the “miss rate”). The false negative rate is also a fraction: the number of false negatives, divided by the total number of people who are known to actually be positive.
For example, suppose you test 1,000 people who are known to have the disease (through some other test), and your rapid test shows 200 people are negative. Your false negative rate for that rapid test is 200 out of 1,000, or 20 percent. Sensitivity is then calculated by subtracting this number from 1, so if you have a false negative rate of 20 percent, your sensitivity is 80 percent. That is, in this example, you can trust a negative test 80 percent of the time.
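The worked example above can be sketched in code. The 1,000 known positives and 200 missed cases are this article’s hypothetical numbers, not real study data.

```python
# 1,000 people known (via some other test) to have the disease:
false_negatives = 200  # the rapid test wrongly says they don't have it
true_positives = 800   # the rapid test correctly catches the rest

# False negative rate = FN / (FN + TP)
false_negative_rate = false_negatives / (false_negatives + true_positives)
# Sensitivity = 1 - false negative rate (equivalently, TP / (TP + FN))
sensitivity = true_positives / (true_positives + false_negatives)

print(false_negative_rate)  # 0.2
print(sensitivity)          # 0.8
```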
Estimates of the sensitivity of rapid tests were all over the place for previous variants of COVID, and there is good evidence that the sensitivity reported for earlier variants doesn’t apply to omicron anyway. So the sensitivity of rapid tests for omicron, right now, is up in the air. While a negative test provides some reassurance that you aren’t infectious, we just don’t really know how much.
Update, Jan. 25, 2022: While most of the original studies of rapid tests (in particular those used for FDA approval) showed essentially 100 percent specificity—whether you were symptomatic or asymptomatic—the picture is becoming more complicated with real-life data, as this recently published erratum to one of the better papers on this question indicates. The good news is that if you are symptomatic and use a rapid test, specificity for most tests remains at essentially 100 percent. However, if you are asymptomatic, the specificity drops a bit, but is probably still in the very high 90s. Unfortunately, once you drop below 100 percent specificity, the prevalence of the disease affects how much you can trust a positive test, even for a variant as common as omicron. In a nutshell: Current information seems to indicate that at this time, if you are asymptomatic, a positive test has about a 1 in 10 chance of being wrong (i.e., you don’t really have COVID at all). This may be too low, but is probably a good way to bet.