The ad has all the elements of an after-school public service announcement: somber adults reading scripted messages, wholesome kids urging their peers to take a looming threat seriously, and a clever slogan—“Together, we can bench concussions.” At first glance, the Protecting Athletes through Concussion Education program looks like an earnest attempt to raise awareness about the dangers of head injuries in team sports. The campaign highlights the problem and provides athletes, parents, and schools with something tangible that they can do—sign up their kids for a 20-minute test of their cognitive abilities, as a baseline measure in case they ever get conked on the head. The Dick’s Sporting Goods retail chain will donate the costs of ImPACT testing for up to one million at-risk kids around the nation.
On closer inspection, though, the whole thing begins to fall apart. “It’s a huge scam,” says physician Robert Sallis, past president of the American College of Sports Medicine. “They’ve done incredible marketing, and they’ve managed to establish this test as the standard of care with no evidence that it has any benefit.”
ImPACT was developed by neuropsychologist Mark Lovell, the CEO of ImPACT Applications Inc., along with University of Pittsburgh neurosurgeon Joseph Maroon. Their test is taken on a computer and begins with a health history and inventory of concussion-related symptoms such as headache and mental fogginess. Then participants must remember various words and shapes, and click the mouse button in games designed to measure their reaction time.
The idea is that each student or professional athlete would take the test once before the season starts, and then again after a head injury. If he scored substantially lower on the second try, he’d be kept off the field until he could match his baseline score or otherwise get full clearance from a doctor.
There’s nothing particularly novel about the ImPACT test. Even its proponents admit that it’s essentially a fancier version of existing protocols like the Sport Concussion Assessment Tool (SCAT2), which also measures cognitive abilities, and can be downloaded for free. ImPACT costs $10 to $20 per exam, or comes in packages of, say, 500 baseline tests for $750. While Lovell acknowledges that his test is similar to SCAT2 (which he had a hand in creating) and others—they all measure short-term memory using word lists, for example—he insists that it’s more comprehensive. For instance, it asks the taker to remember not just words but also shapes. That means participants are measured on both their verbal memory and their spatial memory, which might be affected differently by a brain injury. Tests like SCAT2 are meant for an initial sideline evaluation, he says, and not as a way of testing whether someone is ready to return after an injury. (Both SCAT2 and ImPACT come with a disclaimer stating that they should not be used as the sole determinant of return-to-play decisions; that call should be made by a trained professional.)
ImPACT isn’t the only company marketing souped-up concussion tests. At least three other companies sell their own commercial versions, but ImPACT’s partnerships with companies such as Wells Fargo Bank and Dick’s Sporting Goods, and its advertising campaigns with former NFL players Jerome Bettis, Dan Fouts, and Doug Flutie, have helped it grab the biggest share of a growing market. Its client list includes every professional hockey team, and most of the franchises in the National Football League and in Major League Baseball. Which is not surprising, since Lovell founded the neuropsychological testing program for the NFL and co-directed the National Hockey League’s neuropsychology program from 1997 to 2007. (He still consults for the NHL.) Even the U.S. government has bought into Lovell’s system: Over the past few years, his company has been awarded more than $150,000 in contracts from the Department of Defense.
What good are these tests? ImPACT, like the freebie standard tests, promises to ensure that athletes don’t return to play before the concussion is gone. The scariest risk from sending an athlete back on the field too soon is a potentially fatal condition called second-impact syndrome, an uncontrollable swelling of the brain. But experts don’t even agree that the syndrome exists, and if it does, it’s exceedingly rare (a fact that Lovell readily acknowledges). In a paper published last year, neuropsychologist Christopher Randolph calculated that if the kind of brain swelling attributed to second-impact were preventable by such testing, it would require 18 million baseline assessments before a single case turned up.
A more common issue is post-concussion syndrome—in which the symptoms of a concussion linger for weeks or even months following a head injury. (Most concussions resolve within a few days.) In theory, returning to action too soon after a head injury can increase an athlete’s chances of developing a persistent post-concussion syndrome, although right now, that’s still just a hypothesis. But even here, tests like ImPACT aren’t superior to the trained judgment of an athletic trainer or doctor, who can check for those lingering symptoms and make sure the athlete stays on the sidelines until they’re gone.
Much of ImPACT’s allure lies in its ability to turn symptoms into a score (something SCAT2 can do for free), and to detect more subtle cognitive impairments—slight memory loss, for example—that might elude detection by a trainer or physician. But it’s not enough for a test to produce numbers. You have to know what to do with the data the test churns out, and that’s where things get confusing.
The test produces scores for five different areas—motor processing speed, reaction time, visual memory, impulse control, and verbal memory. But it’s hard to know exactly what these scores mean. Does a drop in the score for reaction time mean that an athlete’s brain is impaired, or just that she hasn’t yet had her coffee? What if an athlete improved on one measure but regressed on another?
The adolescent brain is still developing, and while differences in scores from one test to the next could represent post-concussion syndrome, they might also reflect the myriad other factors that can affect a young person’s cognitive abilities from day to day—everything from sleep to attitude to learning. Or maybe some overeager athletes are intentionally flubbing their baseline tests, so they can beat their scores and stay on the field even with minor symptoms. It may sound far-fetched that kids would try to game the test, but that’s one of Lovell’s main selling points: He says his computer program can catch athletes who try to hide their symptoms—the ones who pretend their headaches have gone away so they can get into Friday night’s game. As for the kids who try to botch their baseline, Lovell says he’s come up with a “validity scale” to catch them. “It basically looks for deviant performances. It’s our secret sauce.”
All that aside, if you’re to trust the numbers, the ImPACT test would need to produce the same scores for a given kid each time he or she takes it in an unimpaired state. That doesn’t always happen. In one independent study, 118 healthy student volunteers took a baseline ImPACT test and then returned to retake the test twice more, 45 and 50 days later. In the follow-ups, more than one-third of the concussion-free participants showed up as false positives—their scores dropped enough to suggest they had concussion symptoms they weren’t reporting. Lovell counters that this study was published in the second-tier Journal of Athletic Training, rather than a more respected neurology journal. But while it’s true that other studies have found slightly better correlations from test to test, critics say there’s still so much variability between the baseline and follow-ups that it’s virtually impossible to use them for calculating an overall probability of impairment. Two more studies, published last month, examined the usefulness of ImPACT and concluded that it has very little practical value. The reliability of the test is “unacceptably low,” one warned, before saying that “the empirical evidence does not support the use of ImPACT testing for determining the time of postconcussion return to play.”
So why have hundreds of high schools, colleges, and professional sports teams adopted ImPACT testing? In a word: money. Every time a player gets seriously hurt, it stokes a sense that something must be done. “The response is, let’s throw some money at something, and ImPACT is there to say, we’ll take your money,” says Robert Sallis, the sports physician. ImPACT sells some more tests, and those in charge can show that they’re doing something and protect themselves from potential lawsuits. It’s not just ImPACT getting a cut. Neuropsychologists and other experts who pony up a yearly $1,500 subscription fee are christened “certified ImPACT consultants” and promised “access to ImPACT’s sports concussion business practice tool,” “extra public relations assistance with your local media,” and “excellent referral opportunities.” Neuropsychologists can charge $500 or more to interpret the test, turning a mild concussion into a $500 or $1,000 bill. And while the schools that sign up for ImPACT through the “public service” program get free testing for the first year, they’re on the hook after that.
This is more a clever marketing ploy than sound medical practice. Coaches don’t need a computerized test to prevent concussed athletes from going back on the field before they’re symptom-free; they need a sporting culture that takes concussions seriously and makes it OK to sit out a game because you’re hurt. If we really care about the dangers of concussions, we should be trying to prevent them in the first place, and that’s something ImPACT testing never addresses. Which may explain some of its appeal—it gives the illusion of doing something about concussions without the bother of changing the game.