TQB: Teacher Quality Bulletin

Does state oversight of teacher prep make a difference?


While evaluating teacher preparation programs is a resource-intensive process for state education agencies, new data shows that those investments may pay off when states pay attention to the right things.

A new study focused on Massachusetts finds a direct link between its performance ratings of providers (which could be institutions that house multiple programs) and the future effectiveness of providers' graduates in classrooms, both in the evaluation ratings they earn in their new schools and in their contributions to student learning.

Given this robust correlation and the fact that few states can point to such a positive link in their own processes, it's worth examining what the Massachusetts rating process looks like.

First off, Massachusetts is more interested in evaluating evidence that providers are achieving positive outcomes than in determining whether a provider adheres to a particular practice. For example, rather than emphasizing specific, predefined inputs that providers must meet (e.g., entry requirements, hours in the field), as many states do, Massachusetts examines the impact of whatever practices teacher prep providers choose to institute.

While the examination of instructional programming is part of the Commonwealth's review process, it is not the primary focus for decisions about ongoing approval. In other words, Massachusetts evaluators are more apt to drill down into organizational aspects (systems for continuous improvement, partnerships with school districts, quality of clinical experiences) than into curriculum maps, faculty qualifications, or crosswalks demonstrating how assessments align to standards. That's not to say that Massachusetts views such evidence as unimportant; it simply accords it less weight than many other states do.

Additionally, Massachusetts emphasizes the outcomes of new graduates. While many states attempt to collect survey data from principals on teacher readiness, Massachusetts adds to the mix employment data, the in-service evaluation ratings of teachers and—not to be ignored—student growth data. (Teachers whose student growth data was factored into the provider evaluation scores were not also included in the model of whether provider ratings predicted teacher effectiveness.) Notably, the state education agency, not the providers, assumes responsibility for collecting these outcome data.

The evaluation of Massachusetts' process found that two of the five domains it examines are especially predictive. The first is "partnerships," which gauges how responsive the provider is to districts' needs through numerous indicators, such as the extent to which district partners make contributions that inform the provider's continuous improvement efforts and how well the provider responds to district and school needs through focused recruitment, enrollment, retention, and employment. The second is "field-based experiences," which gauges the strength of the student teaching experience through numerous indicators, such as whether field-based experiences are embedded in provider coursework and take place in settings with diverse learners.

Additionally, there was a nearly linear, positive relationship between how well future teachers performed on the basic skills test and their providers' rating, suggesting that a critical feature of stronger providers is the relative rigor of their admissions process. This finding, which attests to the significance of the basic skills test, should give pause to the ten states that recently dropped these tests at the point of entry into the provider or made them optional.

This study does come with some caveats. First, the work identifies a correlation between provider ratings and their graduates' effectiveness, but cannot establish that the providers themselves are the reason those teachers are more effective. Second, this correlation may be picking up a sorting effect rather than anything about the provider itself: teachers graduating from some providers tend to work in schools and districts with higher teacher evaluation ratings and greater student growth. Despite these concerns, Massachusetts' system points to a robust and more objective rating method designed to measure differences among providers.

This new research offers key guidance to states looking to improve their provider evaluation processes: focus more on the quality of providers' partnerships with school districts, the robustness of providers' clinical experiences, and the outcomes of new graduates.