The state should collect and publicly report key data on the quality of teacher preparation programs. This goal was reorganized in 2017.
Student growth data: Georgia no longer collects and reports uniform data on the performance and effectiveness of program graduates, as measured by student growth, because changes in state law give districts flexibility in how student growth data factor into teacher evaluations.
Additional Program Data: Georgia collects other objective, meaningful data to measure the performance of teacher preparation programs. Through the state's Teacher Preparation Program Effectiveness Measures (TPPEM) system, programs are required to annually report a wide variety of metrics, including content knowledge assessment pass rates and scores (based on the better of a candidate's first two attempts), content pedagogy assessment scores (likewise based on the better of the first two attempts), teacher performance data from summative classroom observations, and completer and employer surveys.
GA Senate Bill 364; Georgia Rule 505-3-.02; http://www.gapsc.com/GaEducationReform/Downloads/PPEM_FAAQs_October_2013.pdf; http://www.gapsc.com/GaEducationReform/PPEMs/PPEMs.aspx
Collect data that connect student growth to teacher preparation programs, when those programs are large enough for the data to be meaningful and reliable.
Despite new district-level flexibility in how student growth counts toward a teacher's evaluation, Georgia should collect data on the academic achievement gains of students taught by programs' graduates, averaged over the graduates' first three years of teaching, when programs produce enough graduates for those data to be meaningful and reliable. Data aggregated at the institution level (e.g., combining elementary and secondary programs), rather than disaggregated by specific preparation program, have less utility for accountability and continuous improvement because institution-level aggregation can mask significant differences in performance among programs.
Georgia was helpful in providing NCTQ with facts that enhanced this analysis. The state added that revisions to the TPPEM system are underway, and that Rule 505-3-.02 will be revised in the coming months.
Georgia reiterated that it collects data from all state-required certification assessments, including, as recommended by NCTQ, average scaled scores and pass rates. The state also collects the number of attempts and average first-time pass rates, as well as multiple other data points and variables. The state further noted that pass rate assessment data, at the overall and task levels, are shared with and analyzed by diverse and representative advisory groups, and with the Commission, to inform policy and practice. Data on performance assessments are also regularly shared at statewide meetings, including those of P-12 Human Resource Officers, educator preparation program Assessment Directors, and Virtual Learning Communities. Georgia added that each state-approved preparation program has full access to all certification assessment data at the examinee, program, and state levels.
1C: Program Performance Measures
The state should examine a number of factors when measuring the performance of and approving teacher preparation programs. Although the quality of both the subject-matter preparation and professional sequence is crucial, there are also additional measures that can provide the state and the public with meaningful, readily understandable indicators of how well programs are doing when it comes to preparing teachers to be successful in the classroom.
States have made great strides in building data systems with the capacity to provide evidence of teacher performance. These same data systems can be used to link teacher effectiveness to the preparation programs that trained those teachers. States should make such data, as well as other objective measures that go beyond licensure test pass rates, central components of their teacher preparation program approval processes, and they should establish precise performance standards that are more useful for accountability purposes.
National accrediting bodies, such as CAEP, are raising the bar, but they are no substitute for states' own policy. A number of states now have somewhat more rigorous academic standards for admission because they require programs to meet CAEP's accreditation standards. However, it remains to be seen whether CAEP will uniformly uphold its standards (especially given that it has already backtracked on the GPA requirement) and deny accreditation to programs that fall short of these admission requirements. Clear state policy would eliminate this uncertainty and send an unequivocal message to programs about the state's expectations.