This week, we released our paper “Leap Year,” which shares our experience in overhauling our teacher-preparation programs to link certification and classroom performance. I am excited about the work that led to “Leap Year,” not only because it has made our teachers and programs stronger, but because I believe our approach has broader implications for the field of teacher preparation.
First, some backstory. I joke that I am an old-school education reformer—I started teaching more than 20 years ago as a charter Teach For America corps member, and later led TFA’s pre-service training in its early days. We struggled to measure the impact of our training because it was difficult to get timely data on teachers’ actual performance in the classroom. Instead, we had to rely on weak proxies like participant surveys. It wasn’t adequate, but it was the best we had.
The process for measuring the success of teacher pipeline programs did not improve much over the following decade. When I came to TNTP in 2006, we had ready access to proxy measures like surveys and retention rates, but only occasional access to high-quality research comparing the performance of our teachers to that of teachers from other preparation routes.
To the degree we had information, it told us we needed to be better. The teachers we certified through our alternate-route programs were not as effective at raising student achievement as our mission demanded. Despite our rigorous recruitment standards, research told us they were about as good as those from other programs. While we were far from the only teacher pipeline program with plenty of room to grow, I am proud that we refused to accept mediocrity. (You can read more about that here.)
This is why we developed ACE, or the Assessment of Classroom Effectiveness, to evaluate first-year teacher performance through classroom observations, student surveys, student achievement data and principal ratings. As you can read in “Leap Year,” we grant certification only to teachers who demonstrate effectiveness on the job—not based on seat time or course credits or passing a paper-and-pencil content test.
ACE provides a high-quality stream of data from multiple measures about our teachers. It is what I wish I had when I was training TFA corps members almost 20 years ago. With ACE, we know which skills our teachers have mastered and which they still need to refine. We hear feedback directly from their students about what it feels like to be in a classroom led by a Fellow. We can make quick decisions to stop what isn’t working and concentrate on what is. (We plan to share more about Fast Start, the new training strategy that grew out of ACE, in future posts.)
I see this as the future of teacher preparation. Programs, including ours, should maintain a deeper relationship with the teachers they prepare, even after those teachers enter the classroom. There are two benefits. First, we can evolve our programs more quickly and intelligently when we have multiple forms of strong data on how our teachers are doing. Second, we can ensure that only effective teachers make a career in our profession.
In the past, it was acceptable for preparation programs to do the best they could and then let districts decide which teachers progress to tenure. That approach hasn’t worked well. As we’ve shown in a number of policy reports, nearly anyone who wants to stay in teaching can. Preparation programs and schools should work together to set a higher standard.