Research has long examined the ways students benefit from early childhood education, but two new studies from Tulsa point to new areas of gains from Head Start programs and show that school readiness gains are closely predicted by the CLASS tool. While variation between classes and schools continues to be a problem in early childhood education outcomes, CLASS is driving schools toward greater success.
In 1998, Oklahoma became the second state to implement universal pre-K, and it currently serves 70% of four-year-olds. In Tulsa in particular, this takes two forms: one program is the Community Action Project (CAP) Head Start, and the other is run through Tulsa Public Schools (TPS). The TPS program covers all students regardless of income but is limited to four-year-olds, while the CAP program has typical Head Start eligibility limits but is open to three- and four-year-old students.
A recent study followed children who attended the CAP Head Start program at age four and compared their school performance in middle school with that of peers who had attended neither the TPS nor the CAP program.[i] Researchers found significant gains in student attendance and reductions in grade retention. As eighth graders, students who had participated in CAP were 34% less likely to be chronically absent and 31% less likely to have been retained a grade.
This study not only provides a window into how long the benefits of preschool, and Head Start in particular, can last, but it also points to new areas where those benefits appear. Both absenteeism and grade retention are serious obstacles for individual students, as well as for schools. These gains echo other spillover effects from early childhood education that have been demonstrated in middle school and into adulthood.
Still, the lack of correlation between CAP participation and other academic indicators raises a question: what is holding these programs back from greater success? The second Tulsa study may offer insight into this.
The second study examined participants in the TPS program and the variation between schools within that system.[ii] Specifically, it looked at the correlation between the CLASS Instructional Support domain and kindergarten readiness, as measured by performance on the letter-word ID, spelling, and applied problems sub-tests of the Woodcock-Johnson.
The most significant finding was that an increase of about 0.67 on the Instructional Support domain, one standard deviation in the study, was correlated with score improvements of 14%, 12%, and 23% on the letter-word ID, spelling, and applied problems sections, respectively. It is also important to note that Tulsa has uniform standards on metrics such as teacher education level, class size, student-to-teacher ratio, and teacher salary. Given that Tulsa's TPS programs averaged 3.25 in Instructional Support, with average scores of 2.86 on Concept Development, 3.34 on Quality of Feedback, and 3.57 on Language Modeling, there is clearly room for improvement.
While the Head Start average for Instructional Support is 2.88, the top of the scale is 7. Provided that the gains on Woodcock-Johnson scores do not exhibit diminishing returns, raising Tulsa's TPS average of 3.25 to the top of the scale could mean score increases as high as 78%, 67%, and 128% on the letter-word ID, spelling, and applied problems sub-tests. Though it is unlikely that even a classroom providing perfect Instructional Support would see average gains that high, it does indicate the very large potential for improvement that CLASS reveals.
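For readers who want to check the extrapolation, the arithmetic above can be sketched in a few lines. This is only a back-of-the-envelope calculation under the stated no-diminishing-returns assumption, using the study's reported per-standard-deviation gains and Tulsa's TPS average of 3.25:

```python
# Back-of-the-envelope projection: how much headroom does the CLASS
# Instructional Support scale leave, and what might that imply for scores?
SD = 0.67                      # one standard deviation in the study
gains_per_sd = {               # reported % score improvement per SD
    "letter-word ID": 14,
    "spelling": 12,
    "applied problems": 23,
}

current_avg = 3.25             # Tulsa TPS average Instructional Support score
scale_top = 7.0                # top of the CLASS scale

# Roughly 5.6 standard deviations of headroom remain on the scale.
headroom_sds = (scale_top - current_avg) / SD

for subtest, pct in gains_per_sd.items():
    projected = pct * headroom_sds
    print(f"{subtest}: up to ~{projected:.0f}% score improvement")
```

Running this reproduces the figures in the text (the last value, 128.7%, is rounded down to 128% above), which is all the projection claims: a linear extrapolation, not a prediction of what any real classroom would achieve.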
Ultimately, the study demonstrated that CLASS scores can predict the variation among preschools and that improving CLASS scores presents an opportunity to improve school readiness.
Together, these studies demonstrate how much of an impact Oklahoma's commitment to early childhood education has had, but also how far it has to go. Despite promising improvements for students in middle school, the variance among Tulsa's TPS programs indicates that a universally high level of quality could mean much better results in both the short and, likely, the long term. It also indicates that structural components like class size and teacher education are not enough to determine the success of a given program. Still, Oklahoma's use of a Quality Rating and Improvement System (QRIS), though voluntary, provides an optimistic outlook that quality is likely to follow access to early childhood education in Tulsa and beyond.
Joseph Tomchak, Strategy Intern, is from NJ. He is a senior at Yale University majoring in Political Science. He has experience in early childhood policy and international human rights research, and he has also served as a teacher in a Head Start classroom during a summer program.
[i] Phillips, D., Gormley, W., & Anderson, S. (2016). The effects of Tulsa’s CAP Head Start program on middle-school academic outcomes and progress. Developmental Psychology, 52(8), 1247.
[ii] Johnson, A. D., Markowitz, A. J., Hill, C. J., & Phillips, D. A. (2016). Variation in impacts of Tulsa pre-K on cognitive development in kindergarten: The role of instructional support. Developmental Psychology, 52(12), 2145.