Why politically guided science is bad

Research should not be an effort to reach one's desired conclusions.

The other day, a paper was published in the American Economic Review about incarceration’s effect on children. It caused quite a stir, because it concluded that kids can sometimes benefit in certain ways from having their parents locked up:

Every year, millions of Americans experience the incarceration of a family member. Using 30 years of administrative data from Ohio and exploiting differing incarceration propensities of randomly assigned judges, this paper provides the first quasi-experimental estimates of the effects of parental and sibling incarceration in the US. Parental incarceration has beneficial effects on some important outcomes for children, reducing their likelihood of incarceration by 4.9 percentage points and improving their adult neighborhood quality. While estimates on academic performance and teen parenthood are imprecise, we reject large positive or negative effects. Sibling incarceration leads to similar reductions in criminal activity.

There was a torrent of negative reactions to the paper. That’s understandable. You have to be pretty ghoulish to actually like incarceration, and finding out that it can have some beneficial effects for the very children and siblings of the people who get locked up would place us on the horns of a dilemma. Uncomfortably, this is not the first paper to find some sort of effect like this. Here’s another from 2020, which uses a similar methodology but looks at education instead.

Some defended the paper, but very many people were upset about it. Negative reactions from various academics on Twitter ranged from “yikes” to barf emojis to allegations that its publication represented a breach of ethics. Defenders responded that these negative reactions were merely cases of people encountering inconvenient facts.

Those facts — and we do not yet know if they’re facts — would certainly be inconvenient, if you don’t like mass incarceration (and I definitely do not like it!). The U.S. prison system is a human rights nightmare; it would be disgusting to think that casting more people into the maw of that nightmare could be good for their children and siblings. And on top of that, these papers imply that family is sometimes a bad thing — that parents can be such toxic people that throwing them into a dungeon actually makes life better for their kids! That disturbing idea cuts against our deep-seated family values.

Nevertheless, I think calls for the suppression of findings like this are wrong. (And saying that papers like this should not be published, or should have to clear greater-than-usual hurdles for publication, is definitely a call for suppression.) In fact, this reaction is part of what I see as a growing movement in recent years to make scientific inquiry more governed by political ideology. And I think that’s a very bad idea. Scientists can’t ever be fully free of biases, but being less political and more devoted to seeking the facts is a worthy goal that should not be abandoned.

Let me try to explain why.


Science is about giving humanity power

Some people frame the question of politically guided science as a conflict between science and ideology. But in fact, that’s not really what it is, because the idea that science should be devoted to finding the facts is itself an ideology. It’s an ideological belief that humanity is better off knowing the facts than not knowing them.

That’s a deeply humanistic ideology. Knowledge is power, and the idea that human society always deserves more power — that in some general sense, we’ll eventually do the right thing with the knowledge science gives us — is an article of faith. It’s easy to find cases where reality tests that faith — the most obvious and famous example being the building of the atom bomb. Like many of the people who built it, I still wonder whether it was right to give humankind the power to wipe itself out; my faith in the goodness of science is not absolute.

But anyway, I think it’s probably good, on balance, for humans to know more true things, instead of being tricked into thinking false things (to the extent that there are “true” and “false” things in the Universe, epistemological disclaimer blah blah blah). That includes knowing about the true effects of incarceration. Armed with that knowledge, we can make better policy.

“Better” for whom? Some people assume that research showing benefits of incarceration is inherently anti-Black. That’s understandable, because Black people are disproportionately incarcerated, and thus have the most to lose if policymakers were to decide that we need to lock more people up. Some Twitter commentators went so far as to allege that the AER paper wouldn’t have been published if the journal’s editorial board were more diverse — that the very publication of this result is a manifestation of structural racism.

But here’s the thing — the kids and siblings of incarcerated Black people are also Black! If incarceration (conditional on being arrested and going to trial) is good for kids, that means that we’re looking at a tradeoff between the welfare of some Black people vs. the welfare of other Black people. If this research is right, and you suppress it in order to try to help Black people, you might just be hurting different Black people.

Now, to reiterate, I don’t want such a tradeoff to exist. I don’t like the prison system, and I like family. But if that tradeoff does exist — and it’s still far from settled science — I think we are better off knowing it.

In fact, though, the usefulness of research like this goes beyond simply recalibrating the tradeoff between incarceration and decarceration. It also helps point us in the direction of potential new and better policies than simply “lock up more people” or “lock up fewer people”. John Pfaff, a dedicated opponent of mass incarceration who wrote one of the most famous books on the topic, had a good thread in which he explained the practical value of research like this. Excerpts:

In other words, knowledge is useful in large part because it leads to more knowledge. Suppressing a piece of knowledge also suppresses everything else you might subsequently learn as a result.

What if the Bad Guys get a hold of this result?

One worry that’s commonly brought up in these debates is that if bad people get a hold of these research results, they will do bad things with them. Daniel de Kadt, a political scientist at UC Merced, wrote a thread about the paper in which he brought up this concern:

In other words, maybe people like John Pfaff know how to use this research to craft better alternatives to modern-day incarceration or subtly tweak sentencing policy, but Ben Shapiro will see it and start screaming “SEEEE? MASS INCARCERATION IS GOOOOOOOD!!!” to his twelve bazillion followers. And then where will we be?

In fact, you can make a similar argument about almost any piece of research, especially scientific discoveries. The same technology that can cure disease might be used to create bioweapons. The same chemistry discoveries that can create useful new materials can be used to blow people up. And so on.

It’s reasonable for scientists (including social scientists) to be concerned about the evil uses to which their discoveries might be put. But to suppress or modify those discoveries is akin to the Noble Lie — it’s an expression of a belief that you, the researcher, can predict the uses to which society will put your discoveries, and can thus control social outcomes by deciding whether to report what you’ve found.

Once in a while that might be true. If you’re working on a bomb, you probably know it’ll be used to bomb people. But even then, it’s really hard to predict all the consequences. The two nukes we actually used certainly caused horrific mass death, and someday a nuclear holocaust might wipe out much of human civilization. BUT, it might also be true that nukes enable Mutually Assured Destruction, which puts an end to great power wars and ends up saving untold millions in the final balance.

It’s just damn hard to know. And scientists, though they know a hell of a lot about science, probably don’t know much about the future ramifications of their discoveries. Maybe historians of science and political scientists have a little bit of predictive power. But these are just very hard things to predict.

That’s why the ideology of science requires a leap of faith. The ideology of science is about trusting humans to collectively, eventually do the right thing with the knowledge scientists give them — to ignore the Ben Shapiros and listen to the John Pfaffs. It’s humility on the part of individual researchers, combined with a faith in the responsibility of human society.

The “Everybody’s doin’ it” defense

When challenged with this perspective, de Kadt gave a defense that I often encounter in these debates. He claimed that instead of calling for the deliberate introduction of political bias into scientific research, he’s simply calling for acknowledgement of the bias that inevitably exists:

But this defense rings hollow. Being explicit and transparent about one’s moral framework is not actually what de Kadt and other critics of the AER paper are calling for.

If you really wanted to be explicit and transparent about biases and ideologies, you’d have researchers include a section with each paper divulging their own political leanings, disclosing all of their non-scientific personal affiliations, and introspecting on their own personal biases and moral beliefs. But as far as I can see, no one is suggesting anything like this. Instead, what people like de Kadt are advocating is for researchers to alter the way they do and present their research — to be more circumspect about writing down or publicizing certain research conclusions.

That’s not transparency at all. In fact, it’s the opposite of transparency. If you find that incarceration benefits kids, but you don’t write it down, or you don’t publish it, or you state it in a way that downplays the strength of the evidence as you see it, you are making your research methods less transparent.

Introducing political considerations into research methodology is not exposing bias; it is embracing bias. Those two things should not be conflated.

Should scientists be truth-seeking robots?

So what should scientists do? Should they become a caste of pure truth-seekers, removing themselves utterly from the world of politics and public affairs in order to pursue the facts in as objective a way as possible? Certainly not. Scientists are humans like everybody else, and they have a right — and perhaps even a duty — to try to make the world a better place through politics.

But in order to follow both the scientific ideology and their own political ideology at the same time, scientists have to manage a special sort of internal bifurcation. When seeking the facts they have to be as objective as they can, and faithfully publish what they find, but then immediately forget about that objectivity as soon as they’re doing anything other than research! This is, of course, an insanely hard thing to do. It might even result in scientists campaigning against the use of the very technologies they themselves invented!

But that’s what the job requires, I think. Being a scientist is a very hard job. One of the hardest. Not everyone is going to do it well. But they should try, anyway.

Update: A commenter helpfully reminds me of the excellent Liam Bright paper, “Du Bois’ Democratic Defence of the Value Free Ideal”, which makes similar arguments in a more erudite fashion than I am able to, and additional ones besides. From the introduction:

W.E.B. Du Bois…has a defence of value free ideals in science that is rooted in a conception of the proper place of science in a democracy. In particular, Du Bois argues that value freedom must be maintained in order to, first, retain public trust in science and, second, ensure that those best placed to make use of scientifically acquired information are able to do so.

Highly recommended!