
The Positive Impact of IMPACT

October 17, 2013 | by Dan Weisberg

Teacher evaluation is not destroying life as we know it in Washington, D.C. Public Schools. In fact, it seems to be doing exactly what it was designed to do: improve the quality of instruction students receive every day.

That’s the takeaway from a new study of the IMPACT teacher evaluation system in DCPS, conducted by highly respected researchers James Wyckoff and Thomas Dee. They found that IMPACT is helping teachers improve their instructional skills and is helping DCPS keep far more of its best teachers than its least effective teachers.

This study is a big deal, and not just for DCPS. IMPACT led the new wave of teacher evaluation systems that has now spread across the country. It was in development even before we published The Widget Effect and before the federal government launched Race to the Top. Being the trailblazer, it has served as a model for systems in many other states and districts.

The concerns that were raised about IMPACT when it debuted in 2009 are the same ones you’ll find in just about any other school district that’s preparing to launch a new evaluation system. Will more rigorous evaluations actually help teachers improve, or are they just tools for finding and dismissing low-performing teachers? Will tying stakes to an evaluation system based in part on student growth data demoralize good teachers and cause them to flee to other districts?

Four years later, we have some high-quality research to examine those questions. Wyckoff and Dee found that IMPACT really does help teachers improve. The best evidence: Teachers who were rated just below the “effective” threshold (where a failure to improve could cost them their job) or just below the “highly effective” threshold (where a small amount of improvement could make them eligible for significant bonuses or raises) improved by large amounts the next year, on average. 

Wyckoff and Dee also find that fears of IMPACT scaring great teachers away from DCPS simply haven’t materialized in actual retention patterns. In fact, teachers rated “highly effective” under IMPACT stay in DCPS at much higher rates than low-rated teachers. Because the teachers who leave the district are mostly the less effective ones, DCPS is usually able to replace them with more effective teachers. In 2011-12, the year Wyckoff and Dee studied, new hires scored an average of almost 30 points higher on IMPACT than the teachers they replaced (a big difference, larger than the gap between a typical novice teacher and a typical veteran teacher). Stated plainly, when poorly performing teachers leave, kids win. 

The main takeaway here is that the DCPS approach to teacher evaluation—clear, high expectations for teachers, evaluations based on multiple measures of performance including student learning, a clear link between evaluation ratings and retention and compensation decisions—is making it more likely that the typical student in DCPS gets to learn from great teachers. This hard evidence provides hope that states and districts taking a similar approach to evaluation should see similar results, provided they are as rigorous about implementation as DCPS has been.

On a deeper level, this study is the latest evidence that rigorous evaluation and meaningful support for teachers go hand in hand. Districts don’t have to choose one or the other—in fact, they need to do both at the same time if they want to do either one well. Setting clear, high performance standards and evaluating teachers against those standards is a bedrock principle of cultivating better instruction. That’s partly because teachers, like other professionals, will rise to the occasion when you challenge them to raise their performance. But as any good principal could tell you, it’s also because real professional development starts with honest feedback about your current performance.

One final note: discussions about IMPACT often lack substance because DCPS is something of a Rorschach test for one’s general feelings about education policy. It’s worth slowing down to dig into this study with an open mind. The early success of IMPACT is not a reason for DCPS to declare total victory or for other districts to erect a policy shrine to it, as so many have done with Finland. But those who have opposed the work in DCPS shouldn’t brush aside such important research, either. 

More than anything, DCPS is an example of what it’s like to be a first mover. Those who forge ahead furthest and fastest often face criticism from those who say they are reckless. Could IMPACT have backfired in all the ways its early critics feared? Perhaps. But the worst-case scenario should never be a sufficient reason to shelve a good-faith policy idea, especially in a system that fails to prepare seven out of 10 students to do college-level work. We learn what works and what doesn’t by putting ideas to the test in the real world. If DCPS had given in to the fear-mongering back in 2009, we would never have found out about the overwhelmingly positive impact of IMPACT.