“Why do you want to be a teacher?”
If you ask 100 aspiring teachers that question, you won’t get 100 distinct answers. You’ll probably get three or four, presented in 100 slightly different ways. It’s an important question, but a broad one—and it tends to elicit equally broad responses, like “I love kids,” or “I love learning,” or maybe “I got a great education and want to make sure others can too.”
Now, consider how those candidates might respond to this question: “Which of these statements best describes you?” They’ll choose from options like: “To be a successful teacher, I need high-quality professional development,” or “To be an effective teacher, I need engaged parents.” Then they might be asked to agree or disagree with statements like, “All students should be held to the same grade-level standards,” and “Teachers in high-need schools must be given equal resources to teachers in wealthier schools if they are going to be evaluated by the same standards.”
As prospective teachers answer a dozen more questions like that, you’ll see patterns, texture and nuance. When you step back and look at the whole of those small, specific nuggets of information, a holistic portrait will emerge.
It’s interview pointillism, and it’s something we’re thinking about as we refine the way we select TNTP Teaching Fellows. One interesting possibility: using computer-based interviews, at least at the outset of someone’s application to one of our training programs, to ask these sorts of questions. It’s an interview model that sounds inhuman and even a little scary, like the quest for machine-based efficiency gone too far. But we’re looking into computer interviews not because we’re trying to skimp on time, but because we think they might actually predict future teacher performance better than our old model, a daylong in-person interview.
We didn’t always think about selection this way—like many others, we worked hard to create a rigorous screening process to challenge candidates to show us their best. We were able to recruit and train exceptional individuals—but that didn’t mean that they went on to become exceptional classroom teachers. When we took a hard look at whether performance during the screening process was connected to classroom performance, we found very little relationship. So, as we wrote recently, we eliminated the in-person interview.
Now, we think of our selection process as having multiple hurdles, including an application screen and a phone interview, followed by an evaluation during training. This training evaluation is where we get the best information about candidates’ potential, because we see them in front of students over several weeks. But we haven’t stopped asking: What if there is a better way to predict performance earlier in the process? How can we ask better questions, sooner?
That’s where the multiple-choice questions could come in. Rather than relying on whatever meaning interviewers ascribed to an open-ended conversation, these “forced-choice” computer survey questions could give candidates—and us—a clear, specific and common set of terms and ideas to review regarding their skills, experience, aspirations and expectations for teaching as a career. They could replace part of our existing application screen to allow us to drill down on the handful of particular skills we know we’re looking for, like professionalism, critical thinking and receptivity to feedback, which our experience tells us that most effective teachers possess.
In the past, our candidates looked remarkably similar after completing the interview process. With limited differentiation among candidates, it’s hard to predict who will go on to be successful. By testing things that matter for teacher effectiveness, we would hope to arrive at a more robust picture of candidate potential.
Computerized interviewing is also more objective, and while some critics might argue that it could result in the selection of candidates who are all the same, we think it could lead to more diversity in the teaching force. Traditional interviews are subject to well-documented bias. Discrimination based on race, gender or even appearance, among other factors, is a real concern, and computer-based interviewing could mitigate that issue. And its efficiency matters too, since we consider interviews one of several strands of information we use to make decisions.
But interviewing and selection are art and science both. Would computer-based interviews take the art out of selection? Would they potentially turn candidates off, restricting our pool of potential Fellows? What do we lose when we don’t have an in-person meeting to get candidates excited about the prospect of our training? We’re still wrestling with these questions.
The idea of a computer predicting which candidates have the potential to become effective teachers might sound a little crazy. When combined with other components of our selection process, however, I think there’s a fair chance these interviews could do the job. Whatever tactics we eventually adopt, our goal remains clear: asking better questions and getting the right teachers into our pre-service training programs, where we can make the best decisions about who is entrusted with a classroom in the fall.