When evidence isn’t
Just read a fascinating article in last month’s Atlantic by David Freedman about the work of John Ioannidis, a physician and “meta-researcher” (he studies other people’s research) whose work has found that most published medical research – including a sizable portion of randomized, controlled trials – is sufficiently biased and error-prone that it should not be used to make clinical decisions. There are several reasons for this, including the inherent complexity of the subject matter, strong disincentives for researchers to question each other’s work or publicize null findings, and a raft of methodological challenges.
One of those challenges is that what gets measured as an outcome in a lot of research is only loosely related to the actual outcome of interest. Freedman explains:
Even if changing that one factor does bring on the claimed improvement, there’s still a good chance that it won’t do you much good in the long run, because these studies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health “markers” such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe.
It should be obvious, then, why I would comment on this in a blog about schools.
Current research on high schools is focused on three things: test scores, dropout rates, and graduation rates, measures with considerably less empirical validity than the health markers described above. A smaller body of work considers whether students matriculate into some type of postsecondary education, or whether they are able to get jobs. A fraction of that work actually follows students to see if they stay in those jobs or complete their postsecondary education. And an even smaller portion asks what happens after that. Factor in that a good deal of this work is junk (longitudinal research of this sort is plagued by high rates of attrition and by weak instruments that privilege what can be easily measured over what might actually be important), and it’s fair to question whether research (and policy) focused on these basic educational “markers” has any basis in evidence at all.
If we take our work seriously, the outcome that really matters is whether our students go on to live healthy, productive, and (hopefully) happy lives. It only takes a small step back from the rhetoric to recognize the absurdity of substituting test scores and graduation rates for these things, yet we steadfastly refuse to do it.
Why? For one thing, taking a longer view forces us to recognize that many things besides school influence the trajectory of a person’s life. We want to believe schools are the answer – and lately, that schools are the problem – so it’s convenient to avoid this conclusion. For another, we fall prey to the same systematic bias that the medical community does. There is a huge incentive, actively reinforced by federal education policy, to identify and highlight “what works.” But to find things that “work” unequivocally enough to support such claims, we have to narrow (and ultimately distort) our definition of working. So it is with research on new drugs (or new uses of old drugs), and so it is with school reform.