2017 General Teacher Preparation Policy
The state should collect and publicly report key data on the quality of teacher preparation programs. This goal was reorganized in 2017.
Student Growth Data: Missouri does not collect or publicly report data that connect student growth to teacher preparation programs. The surveys administered to principals ask each principal to rate the effectiveness of each new teacher on a scale of 1 to 4, "indicating the teachers' effectiveness in comparison to their performance-based evaluation," although completion of this survey does not appear to be mandatory.
Additional Program Data: Missouri collects other objective, meaningful data to measure the performance of traditional teacher preparation programs. The state requires programs to collect certification pass rates, GPA benchmarks, the number of times the candidate may take the content assessment, scores on the Missouri Pre-Service Teacher Assessment, and survey responses from first-year teachers and their principals.
Comprehensive Guide to the Annual Performance Report for Educator Preparation Programs
https://mcds.dese.mo.gov/guidedinquiry/Educator%20Preparation/Comprehensive%20Guide%202016.pdf
Top 10 by 20 Plan
https://dese.mo.gov/sites/default/files/2016-17_Top10by20Plan.pdf
Memo re: Implementation of the Missouri Standards for the Preparation of Educators
https://dese.mo.gov/sites/default/files/Educator%20Preparation%20Memo%2010-20-2014.pdf
5 CSR 20-400.300
Collect data that connect student growth to teacher preparation programs, when those programs are large enough for the data to be meaningful and reliable.
Rather than relying on self-reported data from principals to gauge teachers' effects on student achievement, Missouri should consider collecting the academic achievement gains of students taught by programs' graduates, averaged over the first three years of teaching, provided the programs produce enough graduates for those data to be meaningful and reliable. Data aggregated at the institution level (e.g., combining elementary and secondary programs), rather than disaggregated by specific preparation program, have less utility for accountability and continuous improvement purposes, because aggregation can mask significant differences in performance among a single institution's programs.
Missouri was helpful in providing NCTQ with the facts necessary for this analysis.
The state added that some educator preparation programs have been able to access student growth data from individual school districts.
1C: Program Performance Measures
The state should examine a number of factors when measuring the performance of and approving teacher preparation programs. Although the quality of both the subject-matter preparation and the professional sequence is crucial, additional measures can provide the state and the public with meaningful, readily understandable indicators of how well programs prepare teachers to succeed in the classroom.
States have made great strides in building data systems with the capacity to provide evidence of teacher performance. These same data systems can be used to link teachers' effectiveness to the preparation programs from which they came. States should make such data, as well as other objective measures that go beyond licensure test pass rates, central components of their teacher preparation program approval processes, and they should establish precise performance standards that are more useful for accountability purposes.
National accrediting bodies, such as CAEP, are raising the bar, but they are no substitute for states' own policy. A number of states now have somewhat more rigorous academic standards for admission by virtue of requiring that programs meet CAEP's accreditation standards. However, whether CAEP will uniformly uphold its standards (especially as it has already backtracked on the GPA requirement) and deny accreditation to programs that fall short of these admission requirements remains to be seen. Clear state policy would eliminate this uncertainty and send an unequivocal message to programs about the state's expectations.