Confronting the Professional Development Mirage
“We want to measure and improve the effectiveness of our professional development for teachers. So…where do we begin?”
In the year since we released The Mirage, we’ve heard this question a lot. We know there are education leaders in schools, district offices, and state agencies across the country who want to reevaluate the effectiveness of the professional development (PD) they offer teachers—but the prospect of doing so is incredibly overwhelming. Last year, we worked with Denver Public Schools (DPS) to take initial steps to measure PD effectiveness by monitoring the first-semester growth of about 200 new teachers who received one-on-one coaching.
The Mirage showed us that in spite of everyone’s good intentions, there’s no guarantee that more frequent, higher doses of coaching or other PD activities will lead to better instruction and student learning at scale. To measure PD effectiveness, many districts currently rely on teacher satisfaction surveys, but how teachers perceive their PD doesn’t tell us much about whether their instruction actually improves as a result. In The Mirage, we found that teachers who improve substantially are as likely to be satisfied with PD experiences as teachers who don’t improve.
There are missed opportunities when districts don’t measure improvements in the classrooms of teachers who participate in coaching or other PD. Absent good information, it’s harder for districts to isolate and replicate promising practices. It’s also less likely that districts can make nimble, real-time course corrections if teaching and learning aren’t improving as they’d expect.
So where should a district begin if it wants to get a better handle on what works and what doesn’t? Through work in Denver and elsewhere, we are learning ways that districts can get the ball rolling on improving PD:
Prioritize early. When the question is where to begin, districts can start by prioritizing just one or two development initiatives for early measurement efforts—perhaps the highest-priority, highest-stakes, or costliest efforts they’re working on. In DPS, focusing on new teacher coaching made sense because the district now hires more than 500 new teachers annually, and students depend on them to teach well right away and stay as long as possible.
Compare growth among sub-sets of teachers. Districts can narrow even further than we did in Denver by offering specialized coaching or other development efforts to only a sub-set of teachers. This allows districts to compare the growth of teachers who participate to that of teachers who don’t—which will help determine whether development efforts are making a real difference.
Use existing data. More than five years ago, DPS launched a multi-measure system for teacher evaluation. The district also had a system through which coaches tracked how and when they supported teachers. So when it came time to assess the impact of new teacher coaching, we started by looking at data from multiple observations intended to provide teachers with feedback on instructional practice. The data was already there—it was just a matter of using information from observations in a fresh way. When districts are matching existing data to the questions they want to answer, we advise them to compare the trajectories of teachers who do and do not participate in a given development effort, to isolate its impact.
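For readers who work with this kind of data directly, the trajectory comparison can be sketched in a few lines of code. This is a minimal illustration only—the teacher IDs and scores below are invented, and a real analysis would pull ratings from a district’s observation system and account for far more context:

```python
# Hypothetical sketch: comparing observation-rating growth for teachers
# who did and did not receive coaching. All IDs and scores are invented
# for illustration.
from statistics import mean

# Each record: (teacher_id, received_coaching, first_score, latest_score)
observations = [
    ("t1", True, 2.1, 3.0),
    ("t2", True, 2.4, 3.1),
    ("t3", False, 2.2, 2.5),
    ("t4", False, 2.0, 2.4),
]

def average_growth(records, coached):
    """Mean change in observation score for one group of teachers."""
    deltas = [last - first
              for _, c, first, last in records
              if c == coached]
    return mean(deltas)

print(f"Coached growth:    {average_growth(observations, coached=True):+.2f}")
print(f"Comparison growth: {average_growth(observations, coached=False):+.2f}")
```

A gap between the two averages is only a starting point—differences in school context, teacher experience, and who opts into coaching all need attention before drawing conclusions.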
Embrace ambiguity. No data set is free from imperfection. For example, for good reasons, some new teachers in Denver didn’t receive ratings for every indicator in every observation with the district framework. So our analysis only examined improvement on the indicators for which teachers received multiple ratings. We didn’t let the absence of data for some indicators stop us from dipping our toes in and examining growth where we could.
Measure to manage now. Understanding how well PD is shifting teacher practice is certainly valuable for long-term planning and resource allocation. But it can also help district and school leaders manage PD more effectively starting right away. For example, our early analysis found that coaches over-prioritized classroom management at the beginning of the year, even though first-year teachers were more likely to receive “effective” observation ratings on classroom culture than on instruction. In response to these findings, district coaches immediately shifted their approach to focus coaching on each individual teacher’s development needs, with a particular eye on instructional skills.
In Denver, the lessons from these early measurement efforts are already leading to refinements to coaching that will help new teachers get better faster and stay in the classroom longer. Earlier this summer, we helped the district launch a set of guidance for how coaches will support new teachers in the future. Coaches will now prioritize four essential skills, which were identified, in part, from the findings of these early measurement efforts.
Getting started with assessing the impact of PD on teacher practice can feel overwhelming, but waiting only causes districts to miss valuable opportunities to gain new insights. The bottom line is that there’s a wealth of information to be gleaned from data that already exists or could reasonably be collected. That information can help improve PD immediately—and those improvements can have a positive effect on kids’ learning right now.