In today’s episode, we reconnect with Grant Van Eaton and Robin Greatrex from Teach For America, shining a spotlight on their dedicated efforts to advance teacher development and promote an equitable education system. In our discussion, Grant and Robin unpack the significant strides taken since their last visit and share early wins from their CLASS® implementation in enhancing teacher observations and coaching. 

This episode takes you behind the scenes of Teach For America's adoption of the CLASS observation tool, offering a candid look at the challenges and victories encountered along the way. Our guests highlight how the incorporation of this system into everyday practice has sparked a cultural shift, emphasizing data fluency and a unified approach to teaching quality. We explore the transformative journey led by administrators who, by becoming certified first, set a powerful example for their teams. 

Links

Follow Marnetta Larrimer on LinkedIn

Follow Grant Van Eaton on LinkedIn

Follow Robin Greatrex on LinkedIn

Visit Teach For America's website



Watch and Listen Now



Read The Transcript

Marnetta: Listeners, welcome back to another episode of Impacting the Classroom, the podcast that talks about big topics in education. I'm Marnetta Larrimer. 

As always, you know I like to ask what's impacting education. Making good on their promise to return, I'm joined today by folks from Teach For America who were with us last season. Welcome, Grant and Robin. For the listeners who didn't get to know you last season, can you take a moment to introduce yourselves and tell us what you do at Teach For America? Let's start with you, Grant.

Grant: Thanks so much, Marnetta. Hi everyone. My name is Grant Van Eaton, and I'm a Senior Researcher at Teach For America. I'm a former teacher, I helped train a lot of our teachers for many years, and now I have the amazing job where I get to think about how we continue to improve and make our program better. I work really closely with our program team. I have Robin here with me today, and I'm really excited to chat more. Robin, I'll toss it over to you.

Robin: Thanks Grant, and hi Marnetta. It's so fantastic to see you again. For you all listening, my name is Robin Greatrex and I am a Senior Managing Director of Programmatic Data Learning and Insights. That means I get the really fun job of leading all of our data learning and continuous improvement strategy for our core member and alumni educator programs here at Teach For America.

Like Grant, I'm a former elementary school teacher, and I am just absolutely obsessed with teacher development, so I feel really lucky every day that I get to do this job.

Marnetta: Oh, and it shows in all the work that you've been doing. All my interactions with both of you just exude passion for the work. But our listeners will decide as we continue to talk with you about your work at Teach For America. 

For those of us who didn't catch you last season, can you give us a brief overview of what Teach For America is?

Grant: Sure, and I'm happy to start there, Marnetta. Teach For America, our primary mission is to find, develop, and recruit top talent for our classrooms. We are looking for people to come and join. 
We have two programs. One that's a little less known, our Ignite Fellowship, where we're actually working with current college students who are doing high-impact tutoring for middle and high school kids throughout the country, especially thinking about all the learning loss post-COVID. 
Then we have our marquee corps member program, where we place teachers in urban and rural classrooms throughout the country to work to achieve an equitable education system for all students. 

Many of our teachers, our corps members as we call them in those first two years, stay on and take on that alumni badge and banner. Of our 65,000+ alumni at this point, about a third are still in school systems as teachers, as principals, and as district and state leaders, and over 80% of them are working in fields that impact education. 

We know that a high quality teacher is so important to students, but they also need access to high quality healthcare, and people working in policy and legal spaces to change the whole system to make it more equitable for all kids. Our primary goal and mission is to make sure that kids have an outstanding leader in front of them in classrooms, and that that experience then, as alumni, drives them to change the system for kids moving forward.

Marnetta: Thank you for that, Grant. Robin, did you want to add anything to that?

Robin: No, I think Grant's summary is quite beautiful.

Marnetta: I know. It was wrapped up in a nice little bow. It almost makes me want to go join. I feel like I need to be there. The last time we spoke, we talked about Teach For America and your organization adopting CLASS. What have you been up to since then?

Robin: We have started implementation. When we were talking with you all last summer, we were gearing up for our first year of implementing CLASS across our entire network of regions and corps members. Talking to you now, we have completed two observation cycles. 

We did one observation cycle in the fall at the beginning of the school year, then we have completed our winter observation cycle in the past few weeks, and are starting our third observation cycle now.
It's really exciting. This means that we have implemented CLASS with all of our pre-K through third grade corps members across all of our regions, and that's around 500 teachers right now. We had our coaches observing their classrooms, scoring those observations, and then entering the data through myTeachstone. We've now rolled that up into our internal system reporting, so people can utilize this data to support continuous improvement with their corps members, with their coaching cohorts, with their regional cohorts, and then broadly across our program. 

As part of that implementation, it means that we have trained over 120 coaches to be certified CLASS observers on the elementary measure. Right now we're sitting at about a 93% certification rate for those folks, and we continue to have more coaches trained as they come in and join staff, and come back from their various leaves.

We are very excited about what we've been able to accomplish in this year one, not only in terms of scaling up the certification of our coaches, but then also in our ability to implement observation windows across our system, and build out the reporting infrastructure in-house that is going to enable us to really utilize this data for continuous programmatic improvement.

Grant: If I could just brag on Robin, her team, and all of our coaches really, really quickly, and just double-click into some of that, this is a lot of infrastructure that we've been building since last summer when we talked with you all. 

Just to scale up to observe, I think it was hundreds of classrooms, around 800, in our last observation cycle, plus all of the infrastructure it takes to get everyone certified across the country, then to implement those observations with fidelity and be able to report and talk about them. 

I have to say, I have just been blown away, and I have so much respect for all of the systems using CLASS right now. That was a huge lift to get here and to really overhaul. We talk about systems change, and this meant changing our own systems internally to do this. 

What I'm really proud about with our staff, too, is the journey they have been on over this last year to not only understand CLASS, deeply internalize it, but then understand what it means to use CLASS data for continuous improvement. We're already seeing that across a lot of our sites. 

Over the last month, we were testing a new protocol we're going to be using to help with reflection, making meaning of the data, and driving action toward continuous improvement. We're seeing a huge increase in the number of classrooms being observed as we work out all the kinks with how to log into systems and how to record things, the unsexy things that no one ever talks about when they just want to see the data. 

I just want to send out a huge shout out to our amazing team that has gone through a lot of change management. Think about the data fluency that's required to think about data in a new way and to use an observation system that they've never used before. 

I think we've been learning a ton over the last few months about how to do that, and that it's a journey. We're like step one. We're excited to share what we've been learning, but also with an eye toward we're just getting started, and we're really, really excited about where we're going to go from here.

Marnetta: I love that, and I love all the numbers. You could tell you all love data, throwing all the numbers out there. When you think about this journey—yes, it's just your beginning and this implementation of CLASS—how did you generate that buzz and that buy-in for your people to be so invested in going through all these changing systems with you?

Robin: I really think the proof for a lot of that is in the tool itself. We haven't had experience with a shared observational tool at Teach For America in over a decade. When people come into the CLASS training and they start getting deep into the dimensions, the domains, and the indicators, and they start understanding what CLASS is looking at, how it defines interactions, and how it defines the depth and complexity of interactions, they inherently see the connection to what they think of as high quality classrooms. 

I think in a lot of ways, the tool speaks for itself. The measure speaks for itself. I think as people were going through their three-day training, they were really experiencing, at least at that level, the alignment between their own beliefs about strong teaching and what great classrooms look like and the measure. 

As they went out and started to observe their corps members and their teachers, their investment grew not only because they could see the measure in practice with the people that they're coaching, but because I think they started to realize how being a trained CLASS observer, a certified CLASS observer, changes the way they see classrooms. 

We've heard anecdotally from multiple coaches across our regional teams that this made them a more rigorous classroom observer, that they were able to have a more balanced view of what was happening in the classroom, and that they were able to see things they didn't normally focus on in their pre-CLASS lives. 

I think the more they're using it, the more they are getting fluent in the measure, fluent in the interactions, fluent in all of the pieces of it. That investment naturally grows because they are seeing how it's making them better coaches, which is then in turn helping them drive the improvement that they're seeing with their teachers.

Grant: What I would add to that, cosign all of that, I think the other thing we've tried to do—like we said, we're learning and growing here too, so we're making adjustments as we go—is to make sure that the system around our coaches enables them to have the time and the space to do that, and that they have the data readily at their fingertips to make and inform their next steps and decisions: by writing it into their job descriptions, making sure that the system and their managers were there to support the implementation of it, and making sure they had the data to be able to track their progress.

I think sometimes the measure alone isn't enough. As we think about implementation, it's the way less sexy things, like did we actually make time in your day for this to happen? Did we make this a formal part of your role responsibilities? Have we set expectations and given you the tools to say how am I going to make sense of this data? 

That was a hard change process for us. This wasn't part of the normal practice of our coaches. We didn't have the shared alignment, and that's what I've been so excited about with CLASS is the common language that it's given us, to be able to talk across all of our settings and to learn together. 

That felt really abstract as we were starting to implement it, and like, why do I have to get certified? Why do I have to be reliable? Why is this important? While we had the on-paper answers for those things, what I've been really enjoying, especially in the second administration cycle, is that people are gaining fluency in using it, actually putting it into practice, using the data like, oh, I get it now. Just the importance of that lived experience of the tool, unsurprising. 

I feel like I almost want to do a CLASS observation on our implementation of CLASS, like the way that it's been implemented. Is productivity enough? Do we have multiple formats for people to engage? What new language are we using together and how are we helping people develop new language? That has been a really fun part of the journey, too, to see, as people put it into practice, how their beliefs, mindsets, and investment shift and change, because they see the impact that it's having for kids.

Marnetta: Yeah, that parallel process really helps, right? You see those effective interactions, and that's really what drives those outcomes. It's a natural tendency to lean into it in your everyday life and in your adult interactions, or your interactions with your children. That's what I found, at least in my world. 

I think one of the things that we didn't talk about with your implementation was the fact that, if I remember, you were talking about building these supports for your people as they were getting certified and going through all these processes. One of the biggest things that you did was your administrators, the people who were going to support them, had to do this first, correct?

Robin: We had the folks that manage our coaches take the training first. I think that really helped them understand what was going to be expected of their coaches, and also provided them with the same experiential foundation around the training, then to enable that continued use and to help build that investment.

Grant and I just spent all this time talking about how the measure speaks for itself, and then the enabling conditions around the measure as a way to build investment. But I think a lot of this also happened through our folks that manage and support our coaches, because they were able to build that initial investment themselves through that training. 

I think that this has been one of the key things about how we're doing this: it's really important for us that people are very clear about what the measure is and what the measure is not. By having folks go through that training, not just our coaches, we were able to open up training slots for people beyond just those coach managers. 

A lot of our regional leadership, and some of our leadership from our fundraising and development teams, were also able to join those trainings and committed to the three-day training. Some of them got certified. 

This is allowing us and our system more broadly to support people in truly understanding what CLASS is, that it isn't just a tool that is about teacher practice. It's more than that because it's about interactions. It's about the kids in the classroom. It's about how they're interacting with each other and how they're interacting with adults. That helps us organizationally operate with a much higher level of nuance and understanding about what this data actually means. 

When our development folks go on to a call with a funder, they are able to say, this data is telling us this about the quality of what's happening in our classrooms. It inherently involves kids. It's not divorced from that. I think that's been really critically important. 

We know that one of the challenges we continue to have is that the difference in knowledge and investment level between people who have gone through those trainings and people who have not is apparent in some places in the system. 

As we go into year two of implementation, beyond the tweaks, iterations, and refinements that we're making to our day-to-day implementation plan of how we do observations, get the data recorded, et cetera, we're also really thinking about how we continue to ensure that other people in the system really deeply understand the tool. 

Maybe they don't go through the full training, but we need to do some work with them to make sure beyond just a one-pager or a quick video, that they can interpret and make meaning of this data just as well as someone who has gone through the training.

Grant: A concrete example of how we've done that really recently that I think just speaks to the commitment of the organization necessary to do this but also to successful implementation, is we've really tried to center the coach as the primary user. As we've been designing our data dashboards, as we've been thinking about our meaning-making and sense-making protocols of how people are going to engage with the data after each observation window, the coach’s experience of that is our primary driver.

They're the user that we're testing it with. We're making sure to get their feedback on the reports. Does it help you answer the questions you have quickly and easily? Is it guiding you through the action steps that you're going to take with your people? 

We've tried to weave into that other leaders from different parts of the organization, so our development teams, our external affairs teams—we had our president and COO at the last one—and building their fluency, not just with the tool and the data but in how the coach is using the data, so that when they see the data, when they're talking about it with people externally, when they're talking with district partners about how we're using the data and how we're improving, it's grounded in the experience that our coaches are having using it to improve interactions in classrooms, not in some 10,000-foot-altitude one-pager, as Robin said. 

I think that's another way in which we're trying to build fluency at all levels of the organization, and that that fluency be grounded in how our coaches use this to drive more positive interactions in classrooms for kids and the communities in which we serve.

Marnetta: I'm loving all of the pieces that you're sharing with our listeners who are getting ready to adopt CLASS and may need some help or assistance with some implementation steps, things that are working, and how you're building that infrastructure for success in your organization. I appreciate all that you've added there. 

You mentioned that you've already completed a couple of cycles of observations. What were the results? I know you probably don't want to give me digits and things, but what were the results and what were some aha or key takeaways from those results?

Grant: I'm not going to give you any numbers. That's partly because we haven't focused on the numbers. The numbers aren't what's important to us. Robin and I would just like a little nerdy geek moment here. What we care about more than anything, and based on the research that your team has shared with us and the deep dives we've done with you all, we know that focusing on improvement over performance and accountability is what's going to drive changes in classrooms for kids. 

Our laser focus from the get-go has been: how do we de-center the numbers and instead focus on shifts in distributions and changes over time? We actually spent a ton of time reimagining what reports would look like, getting away from heat maps and targets, from did you get to this quantitative number? We designed a reporting suite around area plots and joy plots. If you don't know a joy plot, Google it, it's super cool. It's essentially overlaid histograms that we're looking at.

When you first get the data, while you can see where the average score is in your traditional bar graph, we have cutoff lines to show you where the transitions from low to medium and from medium to high are, so that you can place it in context. We have bars for the enterprise average. As you filter by regions and cohorts, you can see where you are relative to the scores for the enterprise. 
We're not putting that you had a 4.7 or you had a 5.0. Part of that is to avoid rater bias. We know the second you say, oh, our goal now is for every classroom to have a five, magically a lot of fives are going to start to appear. So we're trying to de-incentivize that. But we also know that that's not what's going to make the classroom better for kids. There's nothing magical about a five. 
What is magical is when whatever our baseline was shifts. We've created overlaid histograms of the distribution. And the beauty of this is that we saw all three domains shift from time one to time two. We were thrilled beyond belief about that. 
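For readers who want to picture the kind of overlaid-histogram view Grant describes, here is a minimal sketch in Python using matplotlib. The simulated scores, the low/mid/high cutoff positions, and the labels are all illustrative assumptions for this example; they are not Teach For America's actual reporting suite or data.

```python
# Minimal sketch of an overlaid-histogram ("joy plot"-style) view of CLASS domain scores.
# All data, cutoffs, and labels below are invented for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=42)

# Simulated domain scores on the 1-7 CLASS scale for two observation windows.
fall = np.clip(rng.normal(loc=4.2, scale=0.8, size=500), 1, 7)
winter = np.clip(rng.normal(loc=4.6, scale=0.8, size=500), 1, 7)

fig, ax = plt.subplots(figsize=(8, 4))

# Overlaid, semi-transparent histograms show how the whole distribution shifts
# from one window to the next, rather than reporting a single average or target.
bins = np.linspace(1, 7, 25)
ax.hist(fall, bins=bins, alpha=0.5, density=True, label="Fall window")
ax.hist(winter, bins=bins, alpha=0.5, density=True, label="Winter window")

# Cutoff lines mark the (illustrative) transitions between score ranges,
# giving context without centering any single numeric target.
for cutoff, name in [(3.0, "low / mid"), (5.5, "mid / high")]:
    ax.axvline(cutoff, linestyle="--", color="gray")
    ax.text(cutoff, ax.get_ylim()[1] * 0.95, name, rotation=90,
            va="top", ha="right", fontsize=8, color="gray")

ax.set_xlabel("Domain score (1-7 scale)")
ax.set_ylabel("Density of classrooms")
ax.legend()
plt.tight_layout()
plt.show()
```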

The other thing that we were assuming would happen, but it's always great to see the data: our second-year teachers have higher CLASS scores than our first-year teachers. We're seeing that improvement over time, so that's good. As we started to do some statistical analysis on the back end of it, there were no gaps by subgroup, by gender, by race, class, or ethnicity. 

As we looked at our teachers, equity is also really important to us and is at the center of our implementation here. We really wanted to monitor and make sure that there weren't differences in experience or systematic gaps showing up based on subgroups, so that, if there were, we could target and address that, and make sure that everyone was having an equitable experience of the tool. 
We're really thrilled that that didn't show up in the data. Of course, we've only had two time points, so we're going to keep monitoring that. But this shift toward thinking in distributions and thinking like statisticians, as opposed to our linear, algebraic target approach, hasn't been easy. 

Let me be really clear that that is not our default. It's taken some unlearning as a staff, and we've gotten a lot of, well, but I just want to see the numbers. Can't you just show me the numbers? We have been really intentional about how we walk people through that, how we set them up to look at it, how we design the reports to walk them through it.

To answer your question, we did see a shift. The shift was positive, and it was really exciting. There's also a lot of variation in the data. As regions dove into their individual data, there weren't always positive shifts. We got into some really cool discussions about bimodal distributions, where they were like, oh, if I had just seen the average, I would have thought things were okay. But now I see I have a really high-performing group and a really low-performing group, and I need to target that group. 

It shifted the type of thinking and strategic planning that we're doing as a result, how we visualize where teachers are in different interactions in their classroom, and that's been really, really fun to see. I think it's going to have a huge impact as we continue to develop our fluency there and really center how we are shifting distributions over time, not how we are getting the higher scores each window.
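As a toy illustration of the bimodal point Grant describes, the numbers below are invented and are not TFA data; they simply show how an average can look fine while the underlying histogram reveals two very different groups.

```python
import numpy as np

# Invented scores for illustration only: one low-performing group and one
# high-performing group sitting in the same cohort.
scores = np.array([2.1, 2.3, 2.0, 2.4, 6.2, 6.0, 6.4, 6.1])

print(f"Mean: {scores.mean():.2f}")    # ~4.19, which looks "okay" on its own
print(f"Std dev: {scores.std():.2f}")  # the large spread hints something is off

# A histogram (or joy plot) makes the two groups visible immediately.
counts, _ = np.histogram(scores, bins=[1, 3, 5, 7])
print(dict(zip(["low (1-3)", "mid (3-5)", "high (5-7)"], counts)))  # low: 4, mid: 0, high: 4
```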

Robin: Just to build a little bit off of what Grant's sharing—and thank you, Grant, for giving such an amazing summary of where we are in this data journey—I think one of the other hopes we had for implementing CLASS in our system is that it would change the conversation about all the other stuff we do, too. And we're seeing that in these meaning-making sessions that we've been piloting over the past couple of weeks. 

In every conversation I was in, we're looking at the data. We were trying to get under the variance, trying to get under the changes over time, and what we think was happening. We were having these robust conversations with indicator definitions and CLASS manuals out, and all of us really digging in to say, what does this really mean? What do we think is under that? 

But then it inevitably led to conversations of what does this mean for our partnerships with schools? And coaches who are also responsible for holding those partnerships with our individual campuses saying, I'm really excited to use this to enable a conversation with a school leader, with a principal about their disciplinary practices, about the policies in the school. 

In particular, around regard for child perspectives and movement and child comfort, some people were saying, well, we think scores here are a little low because our schools have policies that prevent kids from being able to move around the classroom. Of course, our teachers are then following those policies, and that's why the score is what it is. This now gives me an opportunity, some data, and some definitions to take to that school leader and have a conversation with them about that policy. 

For me, this is amazing. This is exactly what we would hope to be true where we can utilize the shared language that is rigorous, that is research-backed, that we know is connected to the outcomes that we want to see for kids, and this is enabling our people to then move that systems change work forward beyond individual teacher classrooms into the policies of schools, the practices of schools, the conversations that we're having with school leaders. I think those are some of the results that we're also seeing initially that are as worthwhile as what the actual numerical data is telling us.

Marnetta: That is beautiful. That just makes my heart warm because that's what you want. That's the goal. The power of CLASS, I always love to hear these stories. Success stories always make me happy. You said that there was this shift and this improvement that happened from the first cycle round of observations to the second one. What are you providing your educators to support that shift? What do those supports look like?

Robin: Our teachers have a coach, a dedicated coach, a leadership coach from Teach For America that is supporting them in developing their teacher leadership. That's the way we talk about it internally at Teach For America, because at the end of the day, we are not strictly a teacher development organization. We are a leadership development organization. As Grant mentioned earlier, we see the success of our mission towards educational equity, not only resting on our first and second year teachers that we have the most direct contact with, but also in our alumni base. 
In the teaching part of it, in the time that people are corps members with us, that leadership development effort focuses on their teaching. Their coach is their teacher leadership development coach, who provides them not only with support connected to what they're observing in classrooms—they go in and they observe—but also with support around how that person is developing their leadership writ large, and helps them make connections to the broader ecosystem of support that that corps member has access to. 

So helping them make a connection between the goals that they have for their kids, the goals that they have for themselves, and the actions they might take to get to those goals. There's a broad set of tools that that coach has available to provide to the corps members to support their improvement. 

What has happened is we have integrated CLASS into that set of tools and into that coaching approach, where it is one of the key components of our coaching approach at Teach For America: the coaching that's happening should be data-informed. Our coaches are obviously doing the observations themselves, so they have the full data from doing that observation. 
They are also utilizing CLASS scores to isolate areas of strength and areas of development, pull those into the coaching conversations directly with corps members, and align that with the other pieces of evidence that they're seeing: student academic achievement, other data that corps members may be collecting from their classroom, and the places that corps members themselves say are areas of strength or areas for development. 

Our coaches are really serving as the person synthesizing and triangulating, and then making some prioritization decisions in that coaching conversation with corps members. What we've done is we've aligned the CLASS observation windows to the three major coaching conversations that happen throughout the year—beginning of the year, middle of the year, and end of year. 
Now, coaches obviously have touch points with the corps members in between those pillar observations, but our CLASS observation windows are tied to those conversations. So our coaches are bringing that data in to inform the coaching work that they're doing with their corps members directly.

Grant: And Marnetta, just thinking about the scores, like I said, not only are we trying to focus on how our staff use CLASS through a continuous improvement lens, we're trying to continually improve how we're implementing it and how we're learning from that. 

From the research side, we're actually really engaged with a number of research partners around: what's the best way to share CLASS data and CLASS scores with our teachers? Is it more impactful to share holistic feedback? Is it better to share ranges—low, medium, high? Do they respond well to the quantitative data or does it shut them down? 

We actually have a number of proposals out right now and are hoping to launch a few studies in the fall around feedback and coaching cycles, because just like we know teachers are the largest driver of impact for kids that's under the control of the school, the literature also tells us that for teacher professional learning, coach interaction is super important. And we know that's the highest leverage point to help improve those interactions in their classrooms. 

So we're really invested in systematic, randomized controlled trial-level investigations into what's the best way to provide scores and what's the best way to provide CLASS feedback so that it has as much impact as possible for educators and for kids. And we're really excited to dive a little more systematically into that over the next year.

Marnetta: That's exciting, really exciting. That's one of the things that you have coming up next. What are some other next steps for TFA?

Robin: As Grant suggested, we're continuing to iterate and refine our broad implementation strategy about how to do CLASS at Teach For America, so that's ongoing work. But the really exciting big, huge next step forward that we're taking is that we are scaling up to secondary next year. Next year, school year 2024–2025, every single corps member for Teach For America, regardless of where they're teaching, will be observed using a CLASS measure, a CLASS observation tool. 

Right now, as Grant said, we're around 800 corps members in early childhood and lower elementary. We skew our placement secondary, so the biggest group of corps members that we have has yet to be observed, and that's all going to change. We are really, really excited. This is going to be a very big step forward, a very large scale-up, because this means that all of our coaches will be dual certified. They will be renewing their elementary certification and adding a secondary certification. 

We just know logistically that there are going to be some growing pains around scheduling observations, making sure that those observations are scheduled during times that get us a robust indicator of what's happening in the classroom, et cetera. So we're very, very excited. There's a lot of work that we have to do internally between now and July, when our earliest-start regions begin, to figure out how to scale that up to secondary. 

Just to clarify, right now we are doing our pre-K through fourth grade corps members, and next year we will be doing literally everyone in the corps, regardless of their placement, using the two different scoring tools. So we're really, really excited and a little daunted, if I'm being honest. I don't know about you, Grant.

Grant: I think something that's really exciting—I'll stay on the excited thread—is that we also do student perception surveys and get student voice feedback on classrooms for a number of our corps members teaching 5th through 12th grade. This year, since we were just doing CLASS in the primary grades, we didn't have an overlap of those two data sets. But starting next year, we'll have both CLASS scores and Cultivate Student Survey data for the same classrooms. 

Another thing we're really excited about moving forward is having those data overlap, being able to compare what students are saying with what observers are seeing in the interactions in the classroom, and getting into rich conversations about: where does that align? What feedback are students giving? How could your interactions shift in order to address that feedback? So that's super exciting.

The other data nerd thing that I'm excited about: we mentioned we've been user testing these, we call them sense-making or meaning-making protocols. We've done a number of pilot tests with them, but those are going to roll out systematically across the organization next year. 

After every CLASS observation window and Cultivate survey administration, our coaches are getting together with their hubs, with their peers, and saying, here's what I'm seeing in the data. Here's what I'm seeing in my cohort. Here's what I'm seeing as a hub. How are we going to strategically adjust the support we're providing over the coming weeks and months in order to address these? 

We'll also be able to roll that up at the enterprise level and think of it as an enterprise: how are we responding to those and making strategic shifts year over year in the resources we're allocating to different places? To have that common language through CLASS to even be able to do that is a massive win. 

I think this year has been laying the foundation to teach everyone that language, get our hands wet in using it and implementing it. And next year, I think it's going to be a huge burst of how we learn as a system and learn at scale from this data that we're collecting. 

The last thing Robin, I, or anyone at Teach For America wants is for this to feel like a compliance task, where we're just going in to get the data to satisfy some requirement and it doesn't actually generate improvement. I'm glad that a lot of those continuous improvement components are going to be coming online at scale next year, to really systematize the way we are improving as a system based on the really rigorous data that we're collecting.

Marnetta: Wonderful. For our audience who may not know what secondary covers, can you share what ages that includes? What age groups or grade levels?

Robin: This will be all of our 4th grade through 12th grade placed corps members. Any Teach For America corps member, regardless of grade placement, regardless of content area, regardless of whether they are in a special education placement where they are co-teaching or pulling kids out, it doesn't matter. They will have an observation based in CLASS, and we will have a score on what's happening with the interactions in their classroom.

To Grant's point, this is going to be a major engine that's then driving organizational level, collaborative, continuous improvement, where we can really tailor around trends that we're seeing org-wide, trends we're seeing in various cohort groups, down the many layers that Teach For America operates in, because we will have that holistic data for literally everyone using the shared language, using a shared tool.

Marnetta: Perfect. So from pre-K all the way up through 12th grade.

Robin: That’s right.

Grant: The reason why that happens is because around fourth or fifth grade, depending on the system, classrooms move from what we would call a self-contained classroom, where there's one teacher teaching all the subjects, to one where students start to move between classrooms. The structure of the class changes, and the length of the segments starts to change. 

This goes back to the implementation. We wanted to start with a smaller group. Only about 25% of our corps members teach in those primary grades and the other 75% are secondary, so that was a little easier of an entry point. Now that we have comfort and familiarity with that tool, we're expanding it to a much larger audience. Because the structure of the classrooms changes a little bit, and especially when you get into the instructional support domains, those definitions change a little bit as the developmental age of the students changes, that's why we have that fourth or fifth grade break between the two tools.

Marnetta: Perfect, thank you for that clarification, because I was going to tease that out for an understanding. So I appreciate you proactively doing that for me, so I didn't have to ask. 
We've talked a lot about your mission. One of the things you said was that equity is important to you. As great as this organization is, I'm sure we have some listeners who are wondering about our partnership, knowing who we are as an organization and who you are as an organization, and some of the concerns about your approach to bringing teachers into your program. I want to talk about that some. 

Despite evidence that shows that these teachers are quite effective, educator prep and placement programs such as Teach For America stir up some controversy. They have over the years, as people have questioned whether it really is truly equitable, considering that students in disadvantaged areas are given inexperienced teachers. What I'd like to talk about is: how does Teach For America work to bridge those gaps?

Grant: We have heard those concerns, and I think we've made a lot of adjustments in response to those as well. I feel like a lot of people's archetype of Teach For America is that we're taking a lot of recent grads, mostly white female grads from top tier institutions and dispersing them across the country into low income communities. 

That was for a long time the reality in a lot of places. Especially in the last 10 years, we made a strong shift toward ensuring that the corps members that we're selecting mirror and represent the population that we're serving. 

More than half of our corps members identify as BIPOC, and more than two-thirds identify as either BIPOC or coming from a low income community themselves. We've made a really intentional shift on that front in our recruitment. 

We've also shifted from a national recruitment model, where you get sent anywhere in the country, to a very local recruitment model, where we're recruiting people from the communities where they teach to teach in that community. We have also worked in a lot of policy spaces to try to open up routes for people to become teachers who might not have the degree credential that the licensing system might say they need, but who actually bring a ton of wealth and resources and knowledge about the community to be strong teachers, especially in many of our indigenous and rural areas, where that credential requirement can be a barrier to someone who would be a phenomenal teacher working with their students. Those are a number of structural, recruitment-level shifts that we've made. 

To your point, we monitor our impact. There's been a ton of research on TFA's impact that shows that our teachers have the same or an outsized impact in their classrooms, and that is deeply important to us that our teachers are always growing. 

That is honestly why we came to CLASS, why we came to having a structured measure, because we wanted to be sure that we were not waiting on lagged student achievement data in order to know our impact two years after the fact. We wanted to be able to make changes in the moment in response to data that we know has predictive validity for not only students' academic outcomes, but also their social emotional outcomes and development. 

In the way that we're designing our measurement ecosystem, in the way that we are recruiting our people and supporting them throughout their time, in the way we are trying to place our teachers, cluster them more closely together, and provide supports to help them integrate and support one another in the communities in which they're serving and leverage each other's expertise, and in working with alumni who are still teaching in those communities, know the community well, and have those funds of knowledge, those are a number of shifts that we've been making to ensure that our teachers are having not only an outsized impact on their students' education, but are integrated members of the community who are helping drive the needs and goals of that community and what they want to be true for their kids moving forward.

Marnetta: And that's why this is such a great partnership. We are aligned in those goals. Robin, did you want to add anything to what Grant said?

Robin: One of the other things I would add is that I think some of the realities of the inequitable education system that we have in this country make it difficult for any new teacher to be really successful with kids. We see this in the larger issues with teacher retention and teacher shortages, which I know is what we talked about last time. That's a larger piece of this.

As Grant mentioned, we have a very high percentage of our alumni that stay in education and stay in the work, and an even larger percentage of alumni that stay in the broader systems change effort that we know is required. We know we are not going to get to equitable education systems just in classrooms. That's a critical piece of it, but it requires this broader systems change work. A really important part of Teach For America's mission is that we want to do amazing things with kids in classrooms with our corps members, and there's also work that spans beyond that. 

I think one of the critical things about this partnership with you all is that utilizing CLASS at Teach For America puts us in conversation with other school systems. It puts us in conversation with the broader field. It allows us to reposition the work so that it's not Teach For America doing it alone, but Teach For America doing it in coalition with others that have the same mission to create equitable education systems. 

As someone that has worked in both traditional teacher prep and teacher prep here at Teach For America, what I see is that we have more similar problems across traditional and non-traditional teacher prep support and development than we have differences. I'm really driven by this idea that we should really be working together to solve the same problems that we have around teacher professional learning, supporting quality classrooms, as opposed to seeing our problems as separate and apart.

Marnetta: That was also the draw of the second edition of CLASS for you, right?

Grant: Yeah. As we were looking at tools, and I think we talked about this more last time, we did this whole extensive process to determine what tool would work best in our system and brought in stakeholders. But a key draw was that right at that moment, you all were starting to launch the second edition, with its specific focus on diversity, equity, and inclusion. It felt so spiritually aligned. 
To Robin's point, we didn't want to design another boutique measure. We wanted to be able to be in conversation with schools and systems, with something that had really strong research backing. To see that that was a priority for Teachstone, and that they were making intentional shifts there in order to respond to feedback and to be more responsive to classrooms, we were like, yes, this makes sense. This is exactly where we want to be, too. We want to be a part of this. 

That shift to the second edition was also a huge draw for us in not only the quality of the tool, but also in signaling the spiritual alignment of the partnership. I think that's borne so much fruit over the last year in the way we're able to work together and communicate because of our alignment and what we're trying to achieve together.

Marnetta: I have one more question before we go and thank you for going down that road with me. I think it was important for people to understand why this is such a beautiful partnership, how we're aligned in the same goals in our communities and in the profession in which we work. 
I would like for each of you to give advice to other programs. What advice would you give as they begin implementing CLASS? What would you tell them that would be a key thing that they would need to do? We can start with you, Grant.

Grant: I would say ground your staff, your teachers, and the communities where you're going to implement this in the vision of what is possible and the why. 

Measures understandably and deservedly get a bad rap. We have used quantitative data in some really damaging ways, and by we I mean society broadly and education policy in this country over the last 10 or 20 years, so this can be a hard sell. There's a lot of well-earned mistrust around measures. To ground people in the vision of what this is going to enable, to make a better future for our students, is where I would start. From that, all the other things will flow.

It's going to be hard. It's going to be bumpy. Things are going to come up that you didn't expect if you don't have that shared commitment to a vision of what is possible with those data. I think that is what has been really helpful for us and what we come back to when it gets really hard and challenging, is why are we doing this and why are we doing this for our kids and communities? What was the need we were trying to solve for? 

The advice I would give is to really make sure you're able to give that 30- or 60-second elevator pitch on the vision of what you want to be true for all kids as a result of your implementation.

Marnetta: For somebody who didn't know what they were going to say, that came out great. Robin, what would you say?

Robin: I would agree with Grant. I'm going to say a second thing, and we'll see if this is what Grant's second piece of advice would have been if he'd been given two. I think once you have that vision and that clarity about your purpose and your why, design the technical and adaptive changes, and the systems and structures, around that why, because that's what is eventually going to get you to that why. 

For a lot of reasons, we had to do the technical first and then have the adaptive follow along. I really wish we'd been in the position to have both of those things happen at the same time. There are things that we are doing to make sure that we are leading towards that why.

Grant mentioned some of these. We aren't putting a lot of numbers on our data reporting right now because we don't want people to fall back into old habits around just give me a number, just give me a number. We are trying to create meaning-making experiences that are long. The ones that we're piloting are eight hours spread across two days. We want to do that because we want people to have the time to slow down, really think about what the data are saying, and really do some root cause analysis, as opposed to just seeing a number and jumping to action. 

I think my advice is to really think carefully about the technical and adaptive supports that need to be in place to get your system to actually follow through on the why you had for implementing CLASS to begin with, and to put the safeguards and structures in place to help you get there. 

I guess maybe a good metaphor is that Grant is saying to lay out the boardwalk, and I am saying that in the places where you know that boardwalk is going to go over some water, maybe put up a guardrail so that you're not going to fall in as you go across. Those two things working in partnership get you to where you want to go.

Marnetta: Thank you so much. Grant, Robin, again, thank you so much for sharing your big work that you're doing at Teach For America. I'm so interested, and I'm sure the listeners are going to be interested, in hearing from all of you again as you start to get your long-term data from your project. You know you're going to come back to see me.

Grant: We are always here for you, Marnetta. This has been a ton of fun. Thank you for having us.

Robin: Thank you so much.

Marnetta: Listeners, you can find today's episode on our website at teachstone.com/podcast. There you can find the audio and video versions as well as the full transcript of our conversation. And of course, behind great leading and teaching are powerful interactions. Let's build that culture together.