This post first appeared on The Thomas B. Fordham Institute’s Flypaper blog.
“Programmatic series of studies”—that’s how one of my psychology professors described research on learning and memory around twenty years ago. Do a study, tweak it, try again. Persist.
I was reminded of that while reading Learning to Improve: How America’s Schools Can Get Better at Getting Better by Tony Bryk and colleagues. After thirty years of constant reform and little improvement, it’s clear that there’s a fundamental flaw in how the education field goes about effecting change. Quick fixes, sweeping transformations, and mandates aren’t working. Ongoing professional development isn’t working either.
What might work much better is a sustained, systemic commitment to improvement—and a willingness to start with a series of small pilots instead of leaping into large-scale implementation. Guided by “improvement science” pioneered in the medical field, Learning to Improve shows how education could finally stop its reform churn. As Bryk et al. write:
All activity in improvement science is disciplined by three deceptively simple questions:
1. What specifically are we trying to accomplish?
2. What change might we introduce and why?
3. How will we know that a change is actually an improvement?…
A set of general principles guides the approach: (1) wherever possible, learn quickly and cheaply; (2) be minimally intrusive—some changes will fail, and we want to limit negative consequences on individuals’ time and personal lives; and (3) develop empirical evidence at every step to guide subsequent improvement cycles.
That sounds an awful lot like schools across the country engaging in a programmatic series of studies—a change that would likely result in huge improvements. Even better, the book explains how educators can form networks to grow together. Progress is much faster with pilots in multiple locations, as adaptations for each context generate ideas for further tests.
This application of improvement science seems to be the best possible path forward. But it still suffers from a (perhaps inevitable) problem—you don’t know what you don’t know. One example of this problem recurs throughout the book: the Literacy Collaborative is profiled as a network of educators improving their reading instruction. I don’t doubt that their instruction is improving and student achievement is increasing. I also don’t doubt that even better results could be attained with an entirely different approach.
The Literacy Collaborative is dedicated to guided reading, which begins with the teacher selecting a leveled text. As Tim Shanahan has explained, there’s no real research base for leveled readers. The whole notion of assessing a child’s reading level and then selecting (or letting the child select) a text at that level is essentially a farce. Once children are fluent in sounding out words, their reading level primarily depends on their knowledge level, which means it varies by topic.
There is no way—either today or in the future called for by Learning to Improve—to guarantee that the improvement process begins with the best possible ideas. But improvement science may still be our last best hope. The slow, steady progress that widespread application would produce seems to characterize the few examples we have of sustained and, eventually, dramatic improvement, such as in Massachusetts, Finland, and Singapore:
Think of a future in which practical knowledge is growing in a disciplined fashion every day, in thousands of settings, as hundreds of thousands of educators and educational leaders continuously learn to improve. Rather than a small collection of disconnected research centers, we could have an immense networked learning community.
The book’s vision is ambitious—and far more likely to succeed than the reform churn we’ve tolerated for decades.