We’ve been training new teachers for almost two decades, and if there’s one thing we know for sure, it’s that preparing teachers to serve students well from day one in the classroom is really hard. There aren’t any straightforward answers for how to make a lot of people good at one of the hardest jobs on the planet. But we also think it’s possible to do better—and part of that is measuring how well a training approach is translating into outcomes for kids.
For several years now, we’ve been studying the relationship between teachers’ performance in pre-service training and their performance once they begin teaching. To do that, we use our Assessment of Classroom Effectiveness (ACE) to measure new teachers’ performance based on observations, student surveys, achievement data, and principal ratings. And there’s good news: Stronger performance during pre-service training is correlated with stronger school-year performance. Perhaps more importantly, first-year teachers with higher school-year observation scores get significantly better student outcomes.
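For readers who like to see the mechanics, here’s a minimal sketch in Python of how a multi-measure score like this can be assembled. The measure names, weights, and 0–1 scales below are hypothetical and for illustration only; they are not the actual ACE formula.

```python
# Illustrative sketch only: combining several normalized (0-1) measures
# into one weighted composite. These measures and weights are hypothetical,
# not the actual ACE formula.

WEIGHTS = {
    "observation": 0.40,       # classroom observation score
    "student_survey": 0.25,    # student survey results
    "achievement": 0.25,       # student achievement data
    "principal_rating": 0.10,  # principal rating
}

def composite_score(scores: dict) -> float:
    """Weighted average of normalized measure scores."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# A made-up first-year teacher record:
teacher = {
    "observation": 0.82,
    "student_survey": 0.74,
    "achievement": 0.69,
    "principal_rating": 0.55,
}
print(f"composite: {composite_score(teacher):.2f}")  # composite: 0.74
```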
To teacher prep nerds like us, that’s a big deal. Why? Because now we know it’s possible to capture meaningful evidence of what matters most in new teachers’ classrooms. From there, we can make our teacher training better, based on solid evidence of what matters for kids. Being able to connect data from pre-service training to classroom performance is markedly different from what most of the teacher prep field (ourselves included) has been able to do in the past.
When we look at the mountain of data we’ve collected over four years, a couple of big things stand out:
We can predict early on who is more likely to get results with students, and who isn’t. A teacher’s performance in pre-service training is strongly correlated with their school-year performance, whether that’s measured by observations, student surveys, principal ratings, or even value-added scores. This means teacher performance at the end of a short, intensive training program is truly predictive of how they’ll do down the road, and how well their kids will do. This opens up new opportunities for how we train new teachers and how we make hiring decisions, since it allows hiring principals to more accurately gauge which new teachers are most likely to be successful. That’s something principals can’t do with selection criteria based on paper credentials or mindset, which don’t have the same relationship to success with students.
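To give a flavor of the analysis behind that claim, the snippet below computes a Pearson correlation between pre-service training scores and first-year observation scores. The eight teacher records are fabricated for illustration; they are not our data, and the real analyses involve far larger samples and multiple outcome measures.

```python
# Illustrative sketch only: correlating pre-service training scores with
# school-year performance. The numbers below are made up; they are not
# TNTP's data. Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

pre_service = [3.1, 2.4, 3.8, 2.9, 3.5, 2.2, 3.3, 2.7]  # end-of-training scores
school_year = [3.0, 2.6, 3.9, 2.8, 3.4, 2.1, 3.1, 2.9]  # first-year observation scores

r = correlation(pre_service, school_year)  # Pearson's r by default
print(f"Pearson r = {r:.2f}")  # values near +1.0 indicate a strong positive relationship
```

In practice, an analysis like this would also account for school context and grade level; the point here is just the shape of the claim: scores from training line up with scores on the job.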
Looking at what students are doing is the right approach to teacher observations. Two years ago, we released TNTP Core, an open-source, Common Core-aligned teacher observation rubric that focuses on what students are doing and the content in front of them to assess teacher performance. It’s what we use for teacher observations across our training work, and increasingly, we think it makes sense to put it at the heart of our training. Here’s why: TNTP Core allows us to look not at the things we think teachers should be doing, which will look different in different contexts, but at the things we know students should be doing (like trying hard to complete academic work, a key learning behavior) if Common Core-aligned instructional shifts are alive and well in the classroom. And we’re seeing promising early signs that TNTP Core is focusing on the right skills: Teachers who receive higher scores on Core also receive higher student survey scores and higher value-added scores. We’re currently putting the rubric to the test in an external study that will test whether it predicts stronger student performance.
So what are the implications for our work as we move forward?
I used to think we needed to create the perfect model for teacher preparation, and once we did, we’d be done. But looking at data like this for the past four years, I no longer think that’s the goal. I think the data suggests it’s time for a dramatic rethinking of how we approach teacher preparation: one in which aspiring teachers develop deep content knowledge in their subject areas as undergraduates, get classroom experience early and often, and are measured on how they’re doing with students from the get-go. Once we’re doing that, we can continue to evolve the particulars of teacher training and support, based on what the data is telling us.
I no longer believe one static preparation program will ever get us where we want to be. Now, I see that a flexible, responsive model fueled by data that measures the ultimate outcome (student learning) is the way to develop teacher training programs that really serve teachers and kids.
This means planning to make changes continually. When we talk about promising new models, from teacher residencies to district-led programs, our field as a whole should be pushing for evidence of what works. Let’s test whether these new models are actually getting results for kids, and make choices about where to invest valuable resources based on what the data tells us.