If you're a CLASS observer, you've probably found yourself in a situation where you have to make inferences or rely on contextual evidence when assigning scores. However, it should always be your goal to minimize subjectivity and assumptions. You must keep your emotions, opinions, and ideas that are not part of the CLASS tool from influencing your scoring. Achieving an emotionless state of objectivity while observing can be incredibly challenging, and it takes practice to recognize when objectivity is threatened and respond accordingly.
One of the most common threats to objectivity is the observer’s relationship with or previous knowledge about the teacher being observed. This is particularly salient for administrators who are observing teachers whom they know well and have worked with extensively.
It can be very challenging to conduct an observation without taking into consideration a teacher's known strengths, weaknesses, or common practices. Many an administrator has had a thought such as, "Well, I didn't observe Mr. Clark providing scaffolding or specific feedback, but I know he does that all the time, so I'll just bump this Quality of Feedback score up a little to be more accurate."
While understandable, this line of thinking is contradictory to one of the fundamentals of the CLASS—that scores for an observation are based only on evidence observed during the cycle being scored. It may be helpful in this situation to consider that every cycle of observation is a “snapshot” of the classroom environment. We don’t expect to see everything a teacher ever does in each twenty-minute cycle. CLASS scores must reflect the observed evidence from start to finish of the cycle, and nothing else.
Another challenge to objectivity is prior knowledge of teaching in general, such as best practices for a certain activity or lesson, or content-specific knowledge such as literacy skills practice or math instruction. The CLASS tool is extensively research-based, and attempts to comprehensively capture the types of interactions that lead to learning and student success.
However, there are elements of teaching that the CLASS tool might not always capture, like some aspects of lesson planning, fidelity to a curriculum, or adherence to school or district policies. An observer must do their best to set aside knowledge and ideas about teaching that are not a part of the CLASS tool in order to reliably score.
Once an observation and scoring are complete and it’s time to do something with the data, such as have a coaching conversation, then all knowledge is back on the table, and an observer/coach should use every piece of information they can to help the teacher improve.
One final adversary of objectivity is a simple human emotion: empathy. An observer who has experienced all the challenges of teaching, from forgetting coffee in the morning and being a bit short with a student, to struggling to calm a difficult and upset child, is likely to emotionally connect with the plight of the educator being observed.
This empathic connection can make it difficult to “ding” or “mark down” a score, because it feels unfair or harsh to “penalize” a teacher for not doing something in one twenty-minute snapshot of teaching. The solution to this problem is a philosophical examination of the CLASS tool itself. A low score on any given dimension of the CLASS in one observation is not necessarily a negative thing, but merely a reflection of a lack of evidence.
There are many situations in which we might not expect, or even want, to see evidence of a particular dimension during a specific observation cycle. For example, twenty minutes of independent reading time will likely result in very low scores for many CLASS dimensions, due to a lack of interactions and conversation. These low scores don't depict anything negative about the teacher or classroom. If a similar lack of evidence is observed across multiple cycles, however, then there might be an issue that needs addressing.
An observer can avoid an emotional resistance to assigning low scores by keeping three things in mind: CLASS scores from one cycle of observation don't tell the whole story of a teacher's skill; low scores merely reflect a lack of evidence in that cycle; and only once we obtain multiple data points can we draw any significant conclusions.
Have you ever meditated? One of the most challenging aspects of this practice is clearing your mind from day-to-day thoughts that pop into your head. If you meditate, you know that trying to push those thoughts away doesn’t work—in order to free your mind you must first acknowledge those distracting thoughts before you can return to your “moment of zen.”
Reliability testing is stressful, right? Right! Especially when you are an Affiliate Trainer and must pass the test to maintain trainer status! So you want to make sure that you're doing the best you can. You study for the test, put a "Do Not Disturb" sign on your door, and lock yourself in your office with your manual, score sheets, and pencils in hand. (A steaming hot cup of coffee or tea is optional.)
At some point in every training, someone invariably looks up and says, “So, if I want to be reliable, all I need to do is never score a 1 or a 7, right?”
Wrong! Every time I hear this, I want to scream. However, since it’s poor form for the trainer to scream, I maintain my composure and calmly explain that although I understand the intuitive appeal of what I call “The Numbers Game,” I cannot recommend it for the reasons presented below: