We recently talked about why annual recertification is important (bottom line: it’s your yearly chance to test yourself against master-coded videos). But what about between recertifications? How do your observers ensure that they stay reliable throughout the year?


Calibrate with Master-Coded Video 

Calibration, like recertification, gives you a chance to check your coding against master-coded videos periodically and prevent drift.

  • In the middle of data collection, research groups may calibrate monthly or even weekly.
  • They also try to calibrate between rounds of observation so that coders stay sharp.
  • If you have rounds of observation scheduled throughout the year (e.g., fall and spring observations), time calibration to fall right before observers go back into the field, so they can confirm they’ve maintained reliability.

Double Code

In double-coding, two certified observers code the same classroom side by side and compare their codes at the end of the day.

  • Research groups are often required to double-code 20% of classrooms to prove that they are reliable in the field.
  • Teachstone can send double coders to your site if you want observers to double-code with experts.
  • BUT you can also double-code with your colleagues in the field: have both observers code the same classrooms at the same time, standing side by side so they have the same view of the classroom.
  • How often should you double-code? Aim for 10% of your observations. If you are not observing very often, schedule an hour or two of double coding whenever you can, once a month or every two months if possible.

Debrief

Debriefing is part of double-coding, but it gets its own section because, done right, it can be a powerful tool for maintaining reliability. (Thank you, Amanda Williford, for a recent reminder of how to do this right!)

  • Schedule time after double coding to debrief in-depth on at least two cycles.
  • Talk through every code that is off, even if it’s off by only one point.
  • Use evidence from the observation and the manual to come up with consensus codes. Make a note of why you think the codes were off from each other. Did one person hear something the other missed? Was there confusion about how to code a particular interaction, or how much weight it should be given?

The best way to stay reliable is to code often and get plenty of input on your coding, whether through feedback on master-coded videos or from your colleagues.

Do you have other methods you use?

Discover how others use CLASS. Read the Case Studies.