Vaccine allocation, age, and race
The challenge of using statistics to determine life-and-death policy
Recently there was a big and contentious debate about CDC vaccination priorities. A November 23 series of slides from the Advisory Committee on Immunization Practices (ACIP) — a group of medical and public health experts that advises the CDC on vaccination — advocated giving essential workers priority over elderly people in the vaccine queue. This was in spite of the fact that, according to the presentation's own calculations, giving the elderly priority would avert 0.5-2% more deaths.
The recommendation became a culture-war football when people noticed that on an earlier slide, ACIP had listed the fact that racial minority groups are underrepresented in the age >65 population, while overrepresented among essential workers.
In other words, some feared that ACIP — and by extension the CDC — were de-prioritizing old people because they cared less about saving White lives than about saving Black lives. (Note: I’m using the Bloomberg style convention of capitalizing both “Black” and “White”.) A terrific Twitter culture-war battle ensued. This was then complicated by Nate Silver deciding to pick a fight with the entire public health profession over some statistics thing. Everything was very fraught and very silly.
Finally, the debate was mostly quieted when ACIP came out with another set of slides on December 20 that revised its recommendation into something much more balanced — and, helpfully, more complicated, which made it harder for Twitter randos to argue about. The new guidelines will basically split the vaccine between age >75 people and more exposed subgroups of essential workers, then move on to splitting between age >65 people and less-exposed workers.
So things seem to have turned out fairly OK, and the debate will trail off into a long series of recriminations about whether people like Nate Silver or not.
But it’s interesting to think about how demographic variables like race and age should enter into calculations of ethics and public policy. In recent decades, huge amounts of data have become available for ethicists and policymakers to use, and statistical procedures that were once rare and laborious are now easy and commonplace. This means that society is having to wrestle increasingly with statistical consequentialism — the more we know about correlations between policies and outcomes, the more complex our ethical debates become.
Obviously, there are whole fields of study devoted to this, so this blog post is hardly breaking new ground. But I wanted to use the vaccination example to point out one way statistics makes these ethical questions so difficult and fraught. It’s that race-neutral ethics can imply race-sensitive policy.
And then after that I’m going to make a second, related point, which is to note some ways that statistics makes policymaking more difficult.
A race-neutral ethical framework
OK, so suppose we’re deciding who gets vaccinated first, with limited vaccine supplies. And suppose we adopt a simple ethical framework of race-neutral consequentialism. In other words:
We only care about outcomes — who lives and who dies. Later we can make the example more complicated by considering Quality-Adjusted Life Years, but for now let’s just assume that “death = bad, life = good” is the only outcome we care about.
We value the lives of all human beings equally. One Black life and one White life have equal value. One Black death and one White death are equally bad.
This is a really simple framework. Not everyone is going to agree with it (in fact, the original, November ACIP slides appear not to agree with it, since they didn’t recommend the death-minimizing option!). Some people are going to want to bring some notion of individual moral judgement into the equation — in other words, determining who deserves to live and who deserves to die. Other people may think we should value a Black life more than a White life, at least in regards to this pandemic, on the grounds that society usually values Black lives less, so we need to make up for that. Still other people might demand that we be not just race-neutral but also race-blind — that we refuse to even consider race when making these decisions.
This framework says: No. In this simple example, we’re only going to think about life and death, and we’re going to assume that all lives matter equally, but we’re going to use race to give us statistical information about death rates.
With this simple race-neutral framework, with no individual moral judgement, we would still take race into account when allocating the vaccine.
Age and race both predict death rates
As everyone knows, old people are at much higher risk of dying of COVID-19.
So an easy, simple vaccine allocation rule is to vaccinate old people first (after health care workers, of course). This is what Matt Yglesias suggested in a recent, widely read blog post.
OK but age isn’t the only thing we know that’s correlated with COVID death rates! It’s also pretty well known that Black people have higher death rates than White people! (Also Native Americans have very high death rates, Latinos and Pacific Islanders have slightly higher death rates than Whites, and Asians have the lowest death rate of major racial groups.)
In fact, there’s pretty clearly an interaction effect going on here between race and age — Black people die at much higher rates even though they’re statistically likely to be younger than White people. So for old people, the racial gap is even larger.
OK so if you want to save the most lives, and you know both age AND race, then you take them both into account. You do a multivariate regression, and you calculate people’s probability of death, and you give out shots to the highest-probability people first. That will mostly end up being old Black people (and old Indigenous people). Younger Asian people will come last.
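To make the mechanics concrete, here's a minimal sketch of that regression-and-ranking step in Python. The logistic-model coefficients and the people in the queue are invented purely for illustration — a real model would be fit to actual mortality data — but the procedure is the same: predict each person's probability of death, then hand out doses in descending order of risk.

```python
import math

# Illustrative, made-up coefficients for a logistic model of death risk.
# A real model would be estimated from actual mortality data; these
# numbers exist only to demonstrate the ranking step.
INTERCEPT = -9.0
BETA_AGE = 0.10  # log-odds increase per year of age
BETA_RACE = {"White": 0.0, "Black": 0.6, "Asian": -0.3}  # hypothetical offsets

def death_probability(age, race):
    """Predicted probability of death under the toy logistic model."""
    logit = INTERCEPT + BETA_AGE * age + BETA_RACE[race]
    return 1 / (1 + math.exp(-logit))

# A few hypothetical people: (name, age, race).
people = [
    ("A", 75, "Black"),
    ("B", 75, "White"),
    ("C", 30, "Asian"),
    ("D", 80, "Asian"),
]

# Vaccinate in descending order of predicted risk.
queue = sorted(people, key=lambda p: death_probability(p[1], p[2]), reverse=True)
for name, age, race in queue:
    print(name, age, race, round(death_probability(age, race), 4))
```

Note that nothing in this sketch values one group's lives over another's — race enters only as a predictor of risk, which is exactly the distinction the rest of this post turns on.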
So in this example, Black people end up getting higher priority than White people, even though we didn’t value Black people’s lives any more highly! Race was used for targeting, not for deciding who matters more.
Thus, we see how race-neutral ethics can imply race-sensitive policy. So there’s my first main point.
What about gender, income, pre-existing conditions, etc.?
Of course, those aren’t the only variables that affect COVID risk. Gender matters a lot too — men are more likely to die than women. This time the gap is greater for young people.
Of course pre-existing conditions matter too. Income and poverty are going to matter on top of race. And whether people have jobs that put them in contact with lots of other people, especially in indoor settings, is going to be important separately from all those other things. There are probably a lot of other factors.
So to minimize deaths, we can throw a whole host of demographic variables into a regression, calculate a risk score, and hand vaccines out in order of risk score. This is basically how actuaries already determine life insurance premiums. The first people to get the vaccine will probably be old poor Black diabetic grandpas working in meatpacking plants. (Side note: Why, as a society, would we have old poor Black diabetic grandpas working in meatpacking plants? But that is a topic for another day.) Healthy Asian 20something women working from home would probably come last.
Of course, there’s a ton this analysis leaves out. We could add calculations of spread probability, to get better estimates of how much death would be prevented by removing people from the pool of potential spreaders (assuming the vaccine actually does stop infection as well as death). Instead of just measuring death, we could calculate Quality-Adjusted Life Years, in order to weight young healthy people’s lives more than old sick people’s lives. Once we know how well the vaccine prevents transmission, we could incorporate that into the model too. And so on. This would lead to a quite complex algorithm for determining vaccine priority!
Public trust and statistical policymaking
But here we should probably ask: What if this kind of complex statistics-based policymaking erodes public trust in medical and public health institutions like the CDC and the medical system? Would that end up costing lives in the long run?
The first reason statistical policymaking might erode public trust is complexity. There’s a well-known fear of opaque algorithms determining people’s fates. (Just recently, an algorithm was blamed for leaving out front-line doctors at Stanford when determining vaccine priority — though it later turned out to be a simple calculation from a PowerPoint slide.) And there’s a well-known fear of secretive groups of experts determining life-and-death policy without the public having access to their decision-making process.
But here’s the thing: Even with a fully transparent decision-making process and a very simple transparent well-understood algorithm, people will still not understand how life and death are being determined. Normal people do not understand multivariate regression. Even trained experts have difficulty using regressions in practice!
And the more variables in this regression, the harder it is for average people to understand it on an intuitive level. It’s fairly easy to understand “Old people die more of this virus so we’re going to vaccinate old people first.” It’s fairly hard to understand “We weighted a bunch of factors using a mathematical function you may or may not understand, and used that to determine a risk score that tells you whether you can get the vaccine now or not.” And when the algorithm is hard to understand, people might lose their trust in the institutions using the algorithm.
The second reason statistical policymaking might erode public trust is politics. Race is a very inflammatory issue in America right now, to put it mildly. That means people aren’t going to understand how race factors into the allocation process, and will tend to get very, very angry. Remember, our algorithm above was race-neutral — it valued White lives exactly equally to Black lives — but it did use race as a way to calculate risk.
Lots of normal people aren’t going to understand that distinction. They’re going to look at a race-based risk targeting algorithm and say “WHY DO YOU WANT MY GRANDMA TO DIE JUST BECAUSE SHE’S WHITE??!!”. And you can say “It isn’t because we care about White people less, it’s because they’re at lower risk”, but the angry accusation will get 20,278 retweets and your very reasonable explanation will get 6 retweets. Then Nate Silver will jump in and tell you that the experts are doing the statistics all wrong, and The Discourse will go downhill from there.
Eroded public trust in the vaccine allocation process could have unforeseen negative consequences. It might cause people to go antivaxxer. It might cause them to elect insane evil leaders like Donald Trump who do enormous damage to public health institutions and policies. It might cause them to want to defund the CDC.
I don’t even know what it will do, to be honest. I’m not sure anyone does. There’s no generally accepted, solidly evidence-based theory of how statistical policymaking affects public trust; past experience can give us some clues, but can only teach us so much. There might even be no reason to worry in this case — complexity might lead people to give up and just trust the experts, and racial politics might make people value the vaccine more and thus be less skeptical of taking it. I just don’t think we know what will happen.
And that in itself points to a big difficulty and risk of statistics-based policymaking. Statistics tells us a lot about the things we can get data on, and not much about the things we can’t get data on. So we can end up basing our decisions on the former and ignoring the latter, even though the latter is no less important than before. This is sort of a form of the streetlight effect, but for policymaking — we overemphasize the things we can calculate and neglect the things we can’t.
OK Noah, so who gets the vaccine?
I don’t know. The point of this post is not to say who gets the vaccine; I am not enough of a public health expert to have a firm opinion on this (remember the name of this newsletter?). My instinct would be to do something like ACIP’s new policy — just sort of balance things between old people and essential workers. But I’m not sure.
But I do know that the increased use of statistics in policymaking is a double-edged sword. It’s a hugely powerful tool, but like all new technologies, we don’t quite understand the harms and risks. Figuring out how to make policy based on algorithms without eroding public trust and institutions is an enormous social challenge, and one we’ve only just begun to think about.
(Special thanks to UMBC School of Public Policy professor Zoë McLaren, who checked this post for me and added a dose of public health expertise!)