
While it’s gratifying and inspiring to observe warm, nurturing, and instructionally dynamic classrooms, not every classroom looks like that. I received an email the other day from an observer who conducts observations for a research project in middle and high schools. He was concerned about interactions he had observed and wanted to make sure he was doing CLASS coding correctly. Since we have a blog post on the website about classrooms scoring all 7s, I thought it would be important to bring up the other end of the continuum as well.

In his words, the classroom was “consistently chaotic.” He could barely hear the teacher because students were talking, screaming, arguing, banging things on desks, and wandering in and out of the room throughout the observation. He scored each dimension a 1.

Understandably, he left feeling stressed about the situation for the students and teacher, uneasy about his own role as an observer, and unsure how to provide feedback in a way that was fair and useful. No question, it’s complicated.

What do you do when you feel yourself being sucked into the emotional climate of a classroom?

Trust the Manual

Regardless of what you observe during an observation, always, always, always use the CLASS manual when assigning scores. When you’re conducting an observation, your scoring companions are your notes and your manual.

Trust the evidence you collect (i.e., your notes) and compare it against the manual. Look at each dimension separately and decide whether your observations best match the low, mid, or high range for each indicator. Your score weighs the evidence for both the presence and absence of interactions, as well as the overall level of quality across each 20-minute cycle. Be systematic, even when it’s hard, and have confidence in the scores you assign to each dimension at the end of each cycle.

And, did I mention—always use your manual!

 

Remain Objective

I often tell participants in my trainings to try to be a CLASS robot when scoring. As soon as you walk in the room, you are picking up on what CLASS measures, and nothing else. Consider yourself a scoring machine.

The manual at each age level includes an important section on challenges to the observer. Emotions are contagious, and if you’re swept up in extremely negative or positive interactions, it’s incredibly difficult to remain objective.

Be aware of your own ideas and biases ahead of time. Remind yourself why you are there and why you are using the tool; this helps you stick to what you see rather than what you feel. Recognize what you loved or disliked, then put those emotions on hold and let the manual guide you in systematically weighing evidence the way the CLASS tool intends.

Is this easy? No way.

But a robot approach, grounded in the manual, lets you trust what you see and confirm that you’re coding it objectively. Go step by step, focusing on each dimension separately, and give credit where credit is—or isn’t—due.

 

Remember the Purpose

In the case described at the beginning of this post, the situation was especially tough because the observer is part of a research project that is not working with teachers over time to build professional capacity. While he provides teachers with feedback reports based on his observations, those reports are less likely to be connected to specific, targeted, ongoing CLASS-based coaching.

Research plays an essential role in building the body of evidence on effective instructional environments. But in cases like this, where help is so clearly needed, it can feel especially frustrating. No one wins when students or staff members are left alone in classrooms like the one described. Ideally, high-quality data are linked to a larger cycle that helps teachers learn, test, and strengthen their practices.

Regardless of whether you’re working in research, practice, or policy settings, our research papers are a great resource for grounding yourself in why and how CLASS is being used.

 

Observer Support

Conducting observations and sharing results is no easy task. It takes commitment to use the tool correctly, time to do it well, and perspective regardless of what you end up observing. That can be especially tough when you work closely with teachers over time as their professional coach, and even more so when high stakes are involved!

A little parallel process might help frame our situation as observers. Just like we don’t expect teachers to fix problems on their own, we observers need a collegial network to bounce ideas off of and make sure our CLASS lens isn’t drifting.

A few ideas for keeping data quality high include:

  • If you know another reliable observer in your setting or project, try to squeeze in time for both of you to observe a classroom at the same time and compare notes and observations. This really helps you dig into the manual together and make sure you are still using the tool correctly.
  • If you don’t have access to other reliable observers, check out the Teachstone Observer Directory (updated monthly) to find other observers nearby. Maybe you can trade and co-observe a classroom in each of your programs? Consider adding your name to the Directory too—it’s easy and free!
  • Teachstone also offers CLASS Calibration sessions, in which you code a video much as you did during training and then compare your scores against master codes and score justifications.

I sure appreciated his email as a reminder of why our collective work is so important. Teachers and students deserve support. Instructional quality matters not only for student outcomes but for teacher outcomes too (like teacher retention)!

So we need to 1) trust the data we collect by using the tool correctly, and 2) link that high-quality data into strong, ongoing systems of professional support.

What’s your experience in handling situations like the one described here? What worked well? What’s still tough?

Thanks for your questions and reflections - keep ‘em coming! We’re all in this together, and learning as we go.
