
Focusing on what matters, Part 1

Posted on Oct 8, 2012 in General

The Consortium on Chicago School Research is out with a new report on “noncognitive” factors (a misnomer, as the authors admit) and their relationship to students’ academic performance. It’s a comprehensive review of the research literature on the extent to which factors such as academic behaviors, perseverance, academic mindsets, learning strategies, and social skills are associated with how students do in school.

There’s a ton in the report worth noting, so I’m going to try to discuss different aspects of it over a couple of posts. For today, I want to focus on something that isn’t even a primary focus of the report but whose implications may be more profound. As noted above, the primary outcome of interest for the report is academic performance. So how did the Chicago Consortium decide to measure it? I’ll give you a hint: the answer is not state test scores, nor is it SATs.

They chose to use grades. Good old-fashioned, teacher-determined, subjective, unreliable grades. Why?

“Despite all the attention to standardized tests, a growing body of research shows that achievement test scores are not strong predictors of whether students will graduate from high school or college. Research on early indicators of high school performance finds that passing courses and GPA in the middle grades and even earlier in elementary school are among the strongest predictors of high school outcomes. Likewise, high school grades are stronger and more consistent predictors of college persistence and graduation than college entrance examination scores or high school coursetaking.”

These folks are not ideologues. They have done a ton of great work in the past that makes use of state test data, and they are highly respected in the research community. They didn’t choose grades because they have an axe to grind. They chose grades because, when it comes to real outcomes like educational attainment and even long-range employment, grades are a much better measure of students’ academic performance than the tests are. As the authors explain, this may be in part because grades encompass some of the noncognitive factors that seem to be pretty important to long-term success. But it’s also that the tests themselves are simply not very good predictors of those long-term outcomes.

This raises some pretty important questions. If standardized tests as currently constructed aren’t terribly good at predicting the outcomes we care about most, why do we use them to judge the efficacy of virtually all reforms? Why are we hell-bent on using them to rate, hire, and fire teachers? And how much do they really tell us about school effectiveness?