The state should collect and publicly report key data on the quality of teacher preparation programs. This goal was reorganized in 2017.
Student Growth Data: Maryland does not collect or publicly report data that connect student growth to teacher preparation programs.
Additional Program Data: Maryland collects other objective, meaningful data to measure the performance of alternate route teacher preparation programs. The state requires Maryland Approved Alternative Preparation Programs (MAAP) to submit an annual data report that includes principal satisfaction ratings; participants' satisfaction with the training and support received in the program, including their preparedness to teach upon completion; and data from intern supervisors and residency mentors. Maryland also requires that programs advance in their level of program development according to MAAP Guidelines, although the state does not specify any consequences for programs that fail to progress.
However, the state does not collect these data for its traditional teacher preparation programs, for which it collects only annual summary licensure test pass rates.
MD Institutional Performance Criteria: http://marylandpublicschools.org/about/Documents/DEE/ProgramApproval/MAP/InstitutionalPerformanceCriteria_09032014.pdf
MAAP Guidelines: http://marylandpublicschools.org/about/Documents/DEE/ProgramApproval/MAAPP/Developmental%20Guidelines11109.pdf
Collect data that connect student growth to teacher preparation programs, when those programs are large enough for the data to be meaningful and reliable.
Maryland should consider collecting the academic achievement gains of students taught by programs' graduates, averaged over the first three years of teaching, when the programs produce enough graduates for those data to be meaningful and reliable. Data aggregated at the institution level (e.g., combining elementary and secondary programs), rather than disaggregated by specific preparation program, are less useful for accountability and continuous improvement purposes because aggregation can mask significant differences in performance among programs.
Gather other meaningful data that reflect program performance.
Although measures of student growth are an important indicator of program effectiveness, the strongest state systems ensure that data are collected on multiple, objective program measures. Maryland should collect the same data it collects for alternate route programs for all educator preparation programs.
Maryland recognized the factual accuracy of this analysis. The state added that an alternative program's failure to meet standards results in the same penalties imposed on institutions of higher education (IHE) at the time of cyclical program review, ranging from focused revisits to probationary status to program shutdown.
For more information on Maryland's articulated consequences for failure to meet minimum standards, see Goal 1-D: Program Reporting Requirements.
1C: Program Performance Measures
The state should examine a number of factors when measuring the performance of and approving teacher preparation programs. Although the quality of both the subject-matter preparation and professional sequence is crucial, there are also additional measures that can provide the state and the public with meaningful, readily understandable indicators of how well programs are doing when it comes to preparing teachers to be successful in the classroom.
States have made great strides in building data systems with the capacity to provide evidence of teacher performance. These same data systems can be used to link teachers' effectiveness to the preparation programs from which they came. States should make such data, as well as other objective measures that go beyond licensure test pass rates, central components of their teacher preparation program approval processes, and they should establish precise performance standards that are more useful for accountability purposes.
National accrediting bodies, such as CAEP, are raising the bar, but they are no substitute for states' own policy. A number of states now have somewhat more rigorous academic standards for admission by virtue of requiring that programs meet CAEP's accreditation standards. However, whether CAEP will uniformly uphold its standards (especially as it has already backtracked on the GPA requirement) and deny accreditation to programs that fall short of these admission requirements remains to be seen. Clear state policy would eliminate this uncertainty and send an unequivocal message to programs about the state's expectations.