What Andy Smarick Gets Wrong on What The Mirage Gets Wrong
Yesterday on the Fordham Institute’s blog, Flypaper, our friend Andy Smarick shared some reflections on The Mirage, our recent report on teacher improvement. Our finding that the enormous investment school systems make in teacher improvement isn’t actually helping most teachers improve tends to send people into something resembling the five stages of grief. We experienced it ourselves. Andy readily admits that he’s still stuck on “denial,” and from there he raises a big question we’ve heard in other critiques of the report: Can we really trust the measures of teacher performance we used to reach our conclusions about professional development?
Andy knows the ins and outs of teacher evaluation as well as anyone, so we respect his healthy skepticism on this front. Before I address his specific concerns, though, it’s worth pointing out that our findings about teacher development aren’t as dire as he and others have made them out to be. In our research, we found thousands of teachers who improved from year to year. Clearly, some kinds of professional development are helping individual teachers. The problem is that at the systemic level, these teachers are the exception rather than the rule.
That brings us back to Andy’s questions about our methodology. He’s right that nobody has found a perfect way to measure teacher performance, and that many evaluation ratings aren’t as accurate as we’d like them to be (often because they’re inflated). That’s true even in school systems that have worked hard to improve their evaluation systems in recent years, like the districts we studied.
But evaluation systems don’t have to be perfect to give us meaningful trends, especially when we’re studying thousands of teachers. In addition to analyzing overall ratings, we looked at individual measures like value-added data and observation scores, even scores for specific skills. That helped us discover, for example, that many veteran teachers hadn’t yet mastered crucial instructional skills like student engagement, even though they earned high overall evaluation ratings. Most tellingly, all the measures we looked at pointed in the same direction: toward most teachers not improving substantially over time. And it’s not just us seeing these patterns: our results square with the most recent large randomized controlled studies on the issue by the American Institutes for Research.
All of this gives us a lot of confidence in our findings. Still, there’s a broader point that I believe Andy and many others are missing. The root of the problem here is our collective failure to even try to measure the impact professional development has on teacher performance in the first place. Almost nothing school districts are doing to help teachers improve is connected to specific goals around changing teacher performance or student achievement—even though changing those two things is, presumably, the whole point of professional development. The entire effort is basically a journey without a clear destination.
In other words, it would be great if the biggest challenge around professional development were identifying the exact teacher performance measures we should use to evaluate it. But we’re not even there yet. We still need school systems to ask the basic questions that these measures could help answer: Is the professional development you’re providing actually helping teachers improve? How do you know?
The good news is that those questions suggest a very practical path forward. Surely we’d be in a better place if, for example, school systems got concrete about what great teaching looks like (as Andy suggests), and made sure teachers and school leaders bought into that vision. We’d be better off if we started setting measurable goals for teacher development, aligned with that vision of excellence, and kept track of which initiatives actually meet them. And we would do well to shed the long-held assumption that we know how to help 3.5 million individual teachers become masters of their craft, and start considering some new ideas about what schools or the teaching profession itself could look like, ideas that could have a much broader impact on instructional quality.
We appreciate our readers and colleagues in the field putting our findings and methodology to the test. The questions Andy and others have raised are all fair game. I don’t fault him for his skepticism about our findings. But I do have a question: Putting the data aside, do you think the right path forward for school systems is to maintain the status quo on teacher development?