The NFL recently agreed to end the practice of “race norming,” which was used as part of its calculations to determine who was eligible to tap into a $1 billion settlement for former players with traumatic brain injuries. The practice meant that some Black players’ claims were denied because the NFL’s equation assumed that those players started with a lower cognitive function. As the league reverses this discriminatory practice, it’s easy to pick out a villain: the NFL.
But race norming isn’t just used by the league. It’s used throughout medicine in a number of tests and equations to help doctors tell how sick you are. A physician takes some measurements, inputs them into a computer, and then punches in some other information: your gender, your age—and, a lot of times, your race. On Monday’s episode of What Next, I talked with Darshali Vyas, a resident physician at Massachusetts General Hospital, about why what looks like a victory for football players might be part of something much bigger: a reassessment of how all of us are seen when we go to the doctor. Our conversation has been edited and condensed for clarity.
Darshali Vyas: More and more, as our technology improves, there is a movement toward an online calculator, or an algorithm, or a risk score, that helps doctors make difficult decisions. Some decisions are clear-cut and some are more in a gray area. And when we’re trying to make decisions like that—it could be when to start a patient on a certain kind of medication, how to counsel a patient toward or away from a procedure, or when to seek additional testing or imaging—it can be helpful to have a tool that helps us individualize a patient’s risk factors and guide decision-making. And in some ways it’s helpful to have that because it helps doctors be more objective.
Mary Harris: You’re not just going on your gut.
Right. It can be helpful to standardize decision-making in that way, especially when there is a gray area.
But when race is a factor, how do you decide whether to plug that data in or leave it out? Especially when there isn’t a clear answer on what the patient’s race is.
There are no clear guidelines on how to answer that question. And there’s a lot of room for error in judgment to go into that decision. These tools that ask for race typically ask for very constrained categories of race. They’ll say Black, white, Asian. The patients I take care of, their racial identities don’t fall neatly into those categories. So clinicians often will have to make an assumption based on skin color and what you think they identify as, or if the patient’s in front of you, you can ask them what race they identify as. But again, they’re very strict categories. And one problem that these tools don’t comment on at all is what to do if a patient is multiracial or identifies with multiple ethnic backgrounds. Do you pick one? Do you say “other”? And how does that affect what output you get from the tool?
And these tools are based on previous information, right? Like outcomes of patients who have come before. I think about race norming as this closed loop of information, where it’s both documenting a reality, but then there’s this question of whether by documenting the reality, you are then creating a reality because you’ve given this score, which now is going to affect how you treat the patient. And so you’ve sort of used a stereotype to capture someone in a way.
So you’re using a current snapshot of a disparity and using it for a predictive tool to almost continue that disparity into the future. Yeah, it becomes this warped circle of logic.
I’m a little curious what race norming looks like in this NFL situation in particular. Like, if I was a player looking for compensation for a brain injury, what kind of tests would I get and how would they be corrected for race?
Basically, to decide about the settlements, you need to assess the damage done to cognition and brain function. And so these players undergo tests that are interpreted differently based on race. The difference is that the tests assume Black players have lower cognitive function at baseline. And so to qualify for the settlement, Black players have to show a larger decrement in cognitive function.
The NFL has defended the practice in the past, saying it was based on long-established tests and widely accepted scoring methodologies. But there’s no scientific evidence that Black patients have lower cognitive function, of course. And it’s at odds with all of our genetic understanding of race to begin with.
Race is often factored into these tools the same way a biological characteristic might be—like blood pressure or cholesterol. The problem is that race is a social construct, not a biological condition, right?
Just because something correlates with an outcome doesn’t mean the relationship is causal. It’s not something about being Black that makes people more or less likely to have an outcome of interest. It’s the experience of being Black. And in some cases it’s easier for us to recognize a social factor that doesn’t end up in the model. Like, for a lot of these analyses, people will find that insurance type also correlates with the outcome of interest. Insurance type doesn’t end up in the final tool because we can recognize that insurance status is a social determinant of health.
But when race ends up with a signal, it often ends up in the final model. And that does kind of imply that we’re using it in a biological or genetic way.
It’s interesting because I’m sure the argument that someone coming up with one of these tools might use is “Well, the signal is so loud that we have to include it.” And I wonder if you might see that differently. Like, yeah, that’s a clanging bell for the racism in medicine, not some kind of indication that we need to be sorting people in this way.
When you see the signal for race, that should be a call to action, that these racial disparities are really stark and that they need to be addressed at their root cause, not that we should correct for them and just adjust our models around the disparity.
Which means, in a way, kind of accepting the disparity.
And in the worst-case scenario, perpetuating the disparity forward if we’re just correcting our tools around them.
You’ve really dug into how race norming is all over medicine. If you had to tick off the kinds of tests that are race normed, could you do it?
It started with just a few examples that stood out to me and to my classmates. But our work has shown that it’s ubiquitous across all fields of medicine. It’s surgery, it’s obstetrics, it’s general medicine. It’s a really common practice throughout medicine.
You first got interested in how race affects the health care patients receive back in medical school, when you learned that genetic variation is greater within racial groups than between them. But when did you see race used to determine what treatment patients should receive?
When I was on my obstetrics rotation, there was an example of race correction right in front of me: the vaginal birth after caesarean section tool, or the VBAC tool, which also corrects by race.
This VBAC tool is another one of those calculators for doctors. For women who have given birth once by C-section but want to try for a vaginal birth next time—a VBAC—this tool lets you plug in all kinds of information, and then you get a score that tells you how likely a vaginal birth is to succeed. The thing is, telling the tool you were Black or Hispanic lowered your score.
Anecdotally, we heard from practitioners who would use the tool and have a cutoff in their mind. Like, if this calculator gives me a percentage that’s less than 50 percent, I’m not going to offer a VBAC.
And that could mean a more dangerous birth for a Black or Hispanic mom. A successful VBAC has far fewer complications than a repeat C-section, but the tool wasn’t able to factor that in.
The equity concern there is that it may be directing clinicians to steer women of color toward repeat caesarean sections.
I actually remember using the tool in preparing to see a patient with one of the obstetricians I was working with. And we pulled it up before we went in to see the patient and entered the patient’s characteristics into the tool. And then that day at our noon conference teaching session, someone had put up the equation for a VBAC on the screen in front of the whole room, and it had these subtraction factors for African American race and Hispanic race. It was just projected onto the screens, like, oh, this is another example of the same logic.
What was the logic, and why didn’t it make sense?
Basically, there was a group of researchers who wanted to create a tool to help clinicians decide who’s a good candidate for a VBAC. They looked at huge data sets and found a bunch of factors that correlated with having a successful VBAC. A lot of those factors ended up in the tool, like BMI and prior labor history—things that have a clear, mechanistic connection to a vaginal birth. Interestingly, they also found other factors that correlated but that they did not include in the tool. They found marital status correlated with successful vaginal birth. They found insurance type correlated, and they found race correlated. They didn’t end up using marital status or insurance type, but they did include race. And to me, that points to our ability to identify some factors as socially mediated, but we can’t make that connection to race for some reason. We assume race is still biologically relevant.
So you and a couple colleagues wrote a paper urging your peers to reconsider the use of race in this tool. And you’ve seen some progress recently, right?
Just a couple of weeks ago, actually, that VBAC calculator online was officially changed to a version of the same tool that does not use race. And what’s really interesting and exciting is that the same group that validated the original VBAC calculator revalidated the tool with race removed.
Why is that meaningful for you?
The VBAC calculator has officially become the first instance of race correction in a clinical algorithm that’s been systematically reconsidered, revalidated, and abandoned with an explicit concern for equity. But it’s also a powerful demonstration of what equity work can look like: a willingness to respond to and incorporate critique and to reconsider old practices. The same developers who made the first calculator have now made a new one, without race and ethnicity, that they feel confident is a strong predictor of risk for these women. And so the development of this new model exemplifies that, yes, our clinical tools can still be scientifically rigorous and clinically useful without race correction. And it’s powerful to see a group rethink a decision they made years and years ago, respond to an equity concern that was raised, and do it in a scientifically rigorous way.
Something that struck me is that even though the NFL is no longer using these race-corrected scores for cognitive function, doctors can still use them. Which makes me wonder: If you’re a patient, do you even know whether you’re being race-normed?
In general, patients often don’t know when they’re being race-normed. Some of the tools are ones that maybe a doctor will do in front of the patient. But other calculations happen at the lab. So there are a lot of examples of race norming that patients wouldn’t be aware of. And even the VBAC calculator, sometimes clinicians will maybe pull up the calculator and do it with the patient in front of them, but often it’s done before the visit even starts. And so there is an element of this that’s patient advocacy and empowering patients to ask about scores that are being used to help guide decisions about their care.
How do you deal with that as a physician? If you have a patient in front of you and you’re clicking in their electronic medical record and trying to figure out their risk of whatever, do you find yourself making decisions in the moment, like I’m going to leave race off of this one or maybe I’ll put race in this one because I do think it’s important?
It’s really tough. What makes it tough is that the decisions we’re making about whether to include race in a tool are also based on these faulty ideas about how to identify race to begin with. Anecdotally, we hear physicians do all sorts of things to try to make this decision more fair until these tools are revised. And that can mean entering a patient into a tool as white, or running the tool both with and without race selected and showing the patient the range of values that produces. I think what it often ends up looking like is talking with the patient directly about how race is being used in the tool. It can open up a conversation about race correction with the patient themself and a discussion of their actual risk factors for disease: Forget about this category the tool’s making me assign, and let’s just talk to the patient in front of us.