Using the Rubric
Collecting Evidence
A classroom observer’s first step is to observe and record an accurate, unbiased depiction of what is happening in a classroom.
Evidence from a classroom observation can be collected in just about any format, but whatever the format, it should be comprehensive and objective: detailed enough to paint an accurate picture of what is happening in the classroom, and factual and judgment-free. It should be made up of objective statements that tell the story of the lesson without reflecting what the observer thought of it.
Low-Inference Note-Taking
Low-inference note-taking can be a good way to remove subjectivity from the observation. When taking low-inference notes on classroom instruction, think about the following Essential Questions from the TNTP Core Rubric:
- Are all students engaged in the work of the lesson from start to finish?
- Are students working with content aligned to the appropriate standards for their grade and subject area?
- Are all students responsible for doing the thinking in this classroom?
- Do all students demonstrate that they are learning?
Running Records
One strategy for taking low-inference notes is to create a running record of what you are seeing in the classroom. The goal of a running record is to take objective notes that describe exactly what actions teachers and students are taking. Running records can be formatted like the example below:
Time | Teacher Actions | Student Actions |
---|---|---|
1:23 | Teacher claps out a rhythm to get students’ attention. | Most students clap the same rhythm, stop talking in their groups, and turn to track the teacher. |
1:25 | Teacher uses transition procedure to collect student packets; procedure takes less than 30 seconds to complete. | Students pass their packets down the row and the student at the end of the row hands packets to the teacher. |
1:26 | Teacher directs student attention to PPT. | All students silently track the PPT. |
1:27 | Teacher says, “Please answer the question on your Exit Ticket and put it in my basket as you leave.” | Students answer the question: “Which events precipitated the Fall of Rome?” |
The notes above paint a clear and objective picture of what occurred in this classroom. Because the observer tracked the time and used specific details, such as quotes from the teacher, a reader can get a clear sense of what is happening. It’s also important to note that observers should be examining student work and listening to students talk, so there are sometimes gaps in their running records. This example is very detailed; a running record could also be as simple as a pseudo-transcript of what the observer sees and hears, without the delineation of student and teacher actions or the timestamps. However, particularly when reviewing lesson videos, timestamps can be very helpful when discussing Core ratings with other observers.
Avoiding the Pitfalls of Running Records
We recommend the following solutions to avoid some of the common pitfalls that can occur when collecting running records:
Pitfall | Solution |
---|---|
Using opinion statements instead of objective statements. | Distinguish between low-inference statements and opinions by identifying key words that give away subjectivity, e.g., “I think” or “I feel.” |
Using vague quantifying statements. | Replace vague quantifiers with more specific evidence: e.g., “17 of 20 students raised their hands” rather than “a lot of students raised their hands.” |
Using “jargon” or “edu-speak”. | Swap edu-speak for evidence: Rather than saying, “You differentiated by scaffolding questions during the mini-lesson,” record the actual questions the teacher asked: “What is the name of this shape? How is it different from a square or rectangle? Where in real life have you seen this shape?” |
Download Sample Observation Template
Bucketing Evidence
After collecting evidence, a classroom observer must match it to the appropriate performance area before assigning a performance level.
Once you have collected objective evidence and completed a running record, the next step is to match each piece of evidence with the corresponding performance area on the TNTP Core Rubric.
Here’s an example of a running record, which notes the performance area each piece of evidence best corresponds with:
Time | Teacher Actions | Student Actions | Corresponding Performance Area |
---|---|---|---|
1:23 | Teacher claps out a rhythm to get students’ attention. | Most students clap the same rhythm, stop talking in their groups, and turn to track the teacher. | Culture of Learning |
1:25 | Teacher uses transition procedure to collect student packets; procedure takes less than 30 seconds to complete. | Students pass their packets down the row and the student at the end of the row hands packets to the teacher. | Culture of Learning |
1:26 | Teacher directs student attention to PPT. | All students silently track the PPT. | Culture of Learning |
1:27 | Teacher says, “Please answer the question on your Exit Ticket and put it in my basket as you leave.” | Students answer the question: “Which events precipitated the Fall of Rome?” | Demonstration of Learning |
Not every piece of evidence will neatly correspond to a performance area, but that’s OK! If you take a complete running record, you should still have an abundance of evidence to use when assigning your ratings and writing your evidence summaries.
The Core Rubric performance areas are designed to measure unique aspects of student learning, meaning that scoring a 4 or 5 in one performance area doesn’t necessarily mean that all performance area scores will be equally strong; conversely, scoring lower in one performance area doesn’t mean that the overall observation score will be low.
Additionally, you may find that one particular teacher action or “root cause” generates related but separate pieces of student evidence that fit into different performance areas. The key here is to recognize the root cause and ensure that the student-focused evidence you document for each performance area fits the language of the descriptors.
Evaluating Content
How do you assess the quality of the content in a lesson?
It can be difficult to determine if the content of the lesson you’re observing is grade-level appropriate, but below are some resources that can help.
ELA and Literacy-Based Classes: Text Complexity Toolkit
Text complexity is measured both quantitatively (e.g., Lexile measure) and qualitatively (e.g., evaluating the complexity of themes, plot progression, etc.). An ideal analysis will consider both factors when determining whether a text is worthwhile and appropriate for its target student audience. The Text Complexity toolkit below walks teachers and observers through a suggested framework for evaluating whether a text is grade-level appropriate and moves students towards mastery of college and career-level standards.
Read through the Text Complexity Toolkit.
Math Classes: Focus in Mathematics Toolkit
The Focus in Mathematics toolkit provides resources for determining whether the content you’re seeing students engage with is part of the major, supporting, or additional work of the grade—or not part of the work of the grade at all. We recommend looking at both the math content and practice standards when evaluating essential content in mathematics.
Read through the Focus in Mathematics Toolkit and explore the resources available to you for helping determine the Essential Content rating in a Math class.
Science Classes
Not every state has adopted the Next Generation Science Standards, so we recommend finding and bookmarking your state’s science standards and, during every observation, attempting to locate a standard that aligns with the content being taught.
Assigning Ratings
After collecting and bucketing evidence, the next step is to assign ratings to the observation.
There is no perfect system for assigning ratings, but here’s a baseline procedure:
Step 1: Look at the evidence in your running record that you matched to each performance area. Pull this evidence down into the “evidence summary” boxes on your observation report.
Step 2: Consider whether there is enough objective evidence to confidently rate on this performance area.
Step 3: If evidence is lacking, add to it!
Step 4: Using the evidence you’ve collected, assign a whole-number rating to each performance area.
Practice
Watch the videos below and practice assigning ratings for each lesson. Remember to collect and then bucket evidence first. Once you’ve assigned performance ratings, compare your ratings to the master ratings.
Take careful note of how the evidence statements align to the language of the rubric and to the performance areas, and think about how the evidence summaries use the detailed evidence from the running record to provide a clear picture of what was happening during the observation.
Want more practice?
We have over 150 teaching videos you can use to practice assigning ratings. Keep in mind that these videos do not come with master ratings and are not available for download. As you critique the videos, please be respectful of the teachers who were generous enough to share their classrooms with us.
Video Table of Contents (Download this spreadsheet to get a sense of what’s available in the playlists listed below).
Learn how to use this guide to create a strong norming program that will help your staff use the Core Rubric consistently.
If you’re introducing the TNTP Core Rubric to your school or district for the first time, you’re likely thinking about the best way to use this Core Rubric Observation Guide to norm—that is, familiarize—your staff with the tool. This section will help you develop and implement a strong training and norming program that fits your team’s needs.
Defining the Goal of Your Team's Observations
The first step in creating a strong norming program is to define—and make sure everyone understands—the purpose of the observations. Will they help inform teacher development? Will they be part of formal evaluations?
After you establish your main goal, set your norming goal for observers. If, for example, you want to offer teachers high-impact professional development, your training goal might be to make sure observers can consistently identify teachers’ biggest growth opportunities.
If you are using the TNTP Core for formal evaluations, you may want to focus on inter-rater reliability across observers so that observers can use the rubric to deliver consistent, accurate ratings. Inter-rater reliability has two components:
- Accuracy: Can observers assign the correct rating?
- Consistency: Can observers regularly assign the same rating across similar circumstances?
The Elements of a Strong Training and Norming Program
When designing your norming program, we suggest focusing on three key components:
1. Academic Content
The TNTP Core Rubric asks observers to focus on whether the academic content students engage with is appropriately challenging and aligned to grade-level standards. A strong program will help observers understand what appropriate content looks like in each subject area and allow them to practice evaluating content. TNTP observers are trained to use tools like the Focus in Mathematics Toolkit and the Text Complexity Toolkit to evaluate content accurately.
2. Practice and Feedback
A few hours of group discussion or a close read of the rubric is not enough to successfully norm observers: they need plenty of opportunity to practice and receive feedback on their observations. Our observers watch and rate no fewer than seven full-length instructional videos during initial training and then “check in” at three to four points during the year to rate and discuss additional lesson videos or co-observe in classrooms. Overall, they get about 40 to 50 hours a year of observation practice.
3. Alignment
Facilitators should focus on helping observers score lessons as similarly to the normed ratings as possible. At TNTP, observers who conduct high-stakes evaluative observations must meet a minimum threshold of alignment to normed ratings. See below for more on our standard norming “bar” for the TNTP Core Rubric.
Using the Core Observation Guide to Norm Your Staff
The Core Observation Guide provides an in-depth overview of the Core Rubric, as well as opportunities to practice assigning ratings on lesson videos and compare your ratings to TNTP’s normed ratings. If you want to develop a robust training plan, we suggest following the sequence outlined below. Most of the training time should be spent in the Assigning Ratings section, where observers can rate and discuss real lessons.
Part 1: Understanding the Rubric
Understanding the Rubric can be assigned as independent study, or you can review the content as a group and then facilitate a team discussion. When discussing each performance area, we strongly recommend asking participants to highlight the specific, concrete evidence each performance area asks observers to look for, as well as the evidence that pertains to other parts of the rubric.
We also recommend moving through the sections in the following order:
- Culture of Learning
- Essential Content
- Academic Ownership
- Demonstration of Learning
- Performance Levels
- Exemplar Lesson
After reviewing this section of the Guide, observers should have a baseline understanding of what each performance area evaluates and be able to articulate what practice at each performance level looks like.
Part 2: Using the Rubric
Once your observers understand the rubric, they can use the following sections of the Guide to learn how to collect evidence from a classroom observation and use it to assign ratings:
- Collecting Evidence and Bucketing Evidence show observers how to collect the right evidence to rate fully on the Core Rubric.
- Evaluating Content provides tools for evaluating academic content. This list is not comprehensive, so if your school or district uses other tools to evaluate content, you should highlight those here.
- Assigning Ratings provides six lesson videos with normed scores generated by TNTP staff to help you better understand where TNTP holds the bar for Proficiency on the Core Rubric.
The bulk of training time should be spent watching, taking notes on, rating, and discussing the provided instructional videos. These lessons generally score on the higher end of the Core Rubric.
When rating lesson videos, observers should:
- Independently watch, take notes, assign ratings, and record their evidence supporting those ratings
- Discuss their ratings and rationale in small groups or with a partner
- Compare their ratings and rationale to the normed ratings through facilitated discussion
- Ask questions and identify areas of misalignment between their ratings and the TNTP normed ratings
Keep in mind that practice rating high-performing lessons is not enough to truly understand how to use the rubric. Observers must also practice observing lessons of different grade levels, subject areas, and performance levels. Take advantage of TNTP’s large video library to develop this portion of your norming program:
Part 3: Additional Practice
Practicing with lessons of varying performance levels will help observers solidify their understanding of the Core Rubric and their ability to assign accurate, consistent ratings. We’ve found that differentiating between the lower performance levels (e.g., a strong 2 vs. a weak 3) is much harder than recognizing strong performance, so we strongly recommend at least 8 to 10 hours of practice with our videos or in-classroom experiences.
These videos do not include TNTP’s normed ratings, so you’ll need to assemble a team of instructional leaders to develop normed ratings and a rationale for the videos you select. This is a great opportunity for instructional leaders to come together around a common definition of strong practice before sharing with trainees.
The Meaning of "Normed"
Before TNTP observers can conduct evaluative observations, they must meet our established norming threshold of 75 percent “within one” alignment and 50 percent “exact match” alignment, which we call the 75/50 rule. This means that when an observer generates four ratings on TNTP Core, at least three of them (75 percent) must be no more than one rating level removed from the master rating, and at least two of the four (50 percent) must match the master rating exactly. Keep in mind that we apply this standard across all of an observer’s practice ratings, not individual ratings.
Example: Applying the 75/50 Rule
Ana is facilitating a norming conversation with Amy and Jeff on a third-grade math lesson. Here’s how their ratings compare to the master rating:
| | Culture of Learning | Essential Content | Academic Ownership | Demonstration of Learning |
|---|---|---|---|---|
Master | 3 | 2 | 3 | 2 |
Jeff | 3 | 3 | 3 | 2 |
Amy | 4 | 4 | 5 | 2 |
Jeff matched the master rating three times and was one removed from the master rating on Essential Content. Amy matched the master rating on Demonstration of Learning, was one removed on Culture of Learning, and two removed on Essential Content and Academic Ownership. Their norming results:
| | Within One | Exact Match |
|---|---|---|
Jeff | 4 of 4 (100%) | 3 of 4 (75%) |
Amy | 2 of 4 (50%) | 1 of 4 (25%) |
Therefore, Jeff is normed on this video and Amy is not. This example is based on a single lesson; for inter-rater reliability, we would normally look at an observer’s alignment across all of their practice ratings during initial training.
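For teams that track norming results in a spreadsheet or script, the 75/50 check reduces to simple arithmetic. Below is a minimal sketch in Python using the illustrative data from the example above; the names, data structures, and function are hypothetical and not part of any TNTP tool.

```python
# Minimal sketch of the 75/50 alignment check (illustrative only).

MASTER = {
    "Culture of Learning": 3,
    "Essential Content": 2,
    "Academic Ownership": 3,
    "Demonstration of Learning": 2,
}

OBSERVERS = {
    "Jeff": {"Culture of Learning": 3, "Essential Content": 3,
             "Academic Ownership": 3, "Demonstration of Learning": 2},
    "Amy":  {"Culture of Learning": 4, "Essential Content": 4,
             "Academic Ownership": 5, "Demonstration of Learning": 2},
}

def alignment(observer_ratings, master_ratings):
    """Return (within_one, exact) alignment as fractions of all rated areas."""
    total = len(master_ratings)
    within_one = sum(abs(observer_ratings[area] - rating) <= 1
                     for area, rating in master_ratings.items())
    exact = sum(observer_ratings[area] == rating
                for area, rating in master_ratings.items())
    return within_one / total, exact / total

for name, ratings in OBSERVERS.items():
    within_one, exact = alignment(ratings, MASTER)
    normed = within_one >= 0.75 and exact >= 0.50  # the 75/50 rule
    print(f"{name}: within one {within_one:.0%}, exact match {exact:.0%}, normed={normed}")
```

Running this sketch reproduces the table above: Jeff is at 100 percent within one and 75 percent exact match (normed), while Amy is at 50 percent and 25 percent (not normed).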
If you plan to use TNTP Core for teacher development, you may need different metrics for success, like how frequently an observer can identify a teacher’s biggest opportunities for growth. Whatever your goal, a clear bar for success is critical in ensuring your staff collect valid, reliable observation data.
Best Practices for Facilitating Norming Conversations
Based on our years of experience norming evaluative observers at TNTP, here’s what we recommend for facilitating a good conversation:
Preparation is key. Review observers’ ratings in advance and strategically plan your conversation. Is there a particular performance area in which a large number of observers are misaligned? Look for patterns in their rationale that surface incorrect interpretations of the rubric. Those misunderstandings should be the meatiest part of your discussion.
Anticipate the devil’s advocate. Explaining why the rating is not one higher or one lower than the master rating is often just as powerful as explaining the rationale for the master rating. Be prepared to talk about both the disconfirming and the confirming evidence.
Facilitate, don’t dictate. Norming should be a discussion, not a presentation: Observers should be active participants in their own learning, sharing their rationale and pushing one another to support their thinking with objective evidence. Listen closely to observers’ justification for their ratings, especially when they don’t align, to better understand their misunderstanding.
Look for the root cause. Misalignment happens for two main reasons: 1) observers collect insufficient or biased evidence, and 2) they misinterpret the performance descriptors or essential question. Listen closely to the evidence and rationale observers share to diagnose the issue.
Keep the boat on course. Allow your observers to actively drive the conversation, but don’t hesitate to reinforce the right answer. Be prepared to fully explain and justify the master rationale.
Ensure that “right is right.” Push observers to justify their ratings and rationale with objective evidence from the lesson. Avoid assumptions, projections, or other assertions unsupported by student or teacher actions.
Summarize and generalize. The goal is to ensure that observers walk away from a norming conversation with a clear understanding of how to replicate key judgments in new situations.
Ongoing Practice
Norming doesn’t end after initial training! TNTP observers have at least three to four “checkpoints” over the year to practice so their observation scores don’t begin to inflate or deflate over time. Ongoing norming can happen through independent practice on normed instructional videos, live co-observations with instructional experts, or group norming exercises. Whatever the format, the more opportunities observers have to use the rubric and discuss their rationale, the better the quality of their observations will be.
More Resources
For additional information on developing a strong training and norming program, we recommend reviewing the resources linked below. These articles draw on learnings from the Measures of Effective Teaching (MET) Project, sponsored by the Bill & Melinda Gates Foundation.
Joe, J., Tocci, C., Holtzman, S., & Williams, J. (2013). Foundations of observation: Considerations for developing a classroom observation system that helps districts achieve consistent and accurate scores. Princeton, NJ: ETS.
Kane, T., & Staiger, D. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains. Seattle, WA: Bill & Melinda Gates Foundation.
McClellan, C., Atkinson, M., & Danielson, C. (2012). Teacher evaluator training & certification: Lessons learned from the Measures of Effective Teaching Project. Teachscape.