If you're a CLASS observer, you've probably found yourself in a situation where you have to make inferences or rely on contextual evidence when assigning scores. However, it should always be your goal to minimize subjectivity and assumptions. You have to prevent your emotions, opinions, and ideas that are not a part of the CLASS tool from influencing scoring. Achieving an emotionless state of objectivity while observing can be incredibly challenging. It takes practice to recognize when objectivity is threatened and respond accordingly.
One of the most common threats to objectivity is the observer’s relationship with or previous knowledge about the teacher being observed. This is particularly salient for administrators who are observing teachers whom they know well and have worked with extensively.
It can be very challenging to conduct an observation without taking into consideration a teacher's known strengths, weaknesses, or common practices. Many an administrator has had a thought such as, “Well, I didn’t observe Mr. Clark providing scaffolding or specific feedback, but I know he does that all the time, so I’ll just bump this Quality of Feedback score up a little to be more accurate.”
While understandable, this line of thinking is contradictory to one of the fundamentals of the CLASS—that scores for an observation are based only on evidence observed during the cycle being scored. It may be helpful in this situation to consider that every cycle of observation is a “snapshot” of the classroom environment. We don’t expect to see everything a teacher ever does in each twenty-minute cycle. CLASS scores must reflect the observed evidence from start to finish of the cycle, and nothing else.
Another challenge to objectivity is prior knowledge of teaching in general, such as best practices for a certain activity or lesson, or content-specific knowledge such as literacy skills practice or math instruction. The CLASS tool is extensively research-based, and attempts to comprehensively capture the types of interactions that lead to learning and student success.
However, there are elements of teaching that the CLASS tool might not always capture, like some aspects of lesson planning, fidelity to a curriculum, or adherence to school or district policies. An observer must do their best to set aside knowledge and ideas about teaching that are not a part of the CLASS tool in order to reliably score.
Once an observation and scoring are complete and it’s time to do something with the data, such as have a coaching conversation, then all knowledge is back on the table, and an observer/coach should use every piece of information they can to help the teacher improve.
One final adversary of objectivity is a simple human emotion: empathy. An observer who has experienced all the challenges of teaching, from forgetting coffee in the morning and being a bit short with a student, to struggling to calm a difficult and upset child, is likely to emotionally connect with the plight of the educator being observed.
This empathic connection can make it difficult to “ding” or “mark down” a score, because it feels unfair or harsh to “penalize” a teacher for not doing something in one twenty-minute snapshot of teaching. The solution to this problem is a philosophical examination of the CLASS tool itself. A low score on any given dimension of the CLASS in one observation is not necessarily a negative thing, but merely a reflection of a lack of evidence.
In many situations, we might not expect, or even want, to see evidence of a particular dimension during a specific observation cycle. For example, twenty minutes of independent reading time will likely result in very low scores for many CLASS dimensions, simply because there is little interaction or conversation. These low scores say nothing negative about the teacher or classroom. If a similar lack of evidence is observed across multiple cycles, however, then there might be an issue that needs addressing.
Three understandings can help an observer avoid an emotional resistance to assigning low scores: CLASS scores from one cycle of observation don’t tell the whole story of a teacher’s skill; low scores merely reflect a lack of evidence in that cycle; and only once we obtain multiple data points can we draw any significant conclusions.