The state's approval process for teacher preparation programs should hold programs accountable for the quality of the teachers they produce.
Massachusetts's approval process for its traditional and alternate route teacher preparation programs could do more to hold programs accountable for the quality of the teachers they produce.
Massachusetts now requires each organization seeking approval of its preparation program to provide evidence of educator effectiveness, including the analysis and use of aggregate evaluation ratings of program completers, employment data on completers working in the state, survey results, and other available data to improve program effectiveness.
Regrettably, Massachusetts does not appear to apply any transparent, measurable criteria for conferring program approval. The state gathers programs' annual summary licensure test pass rates and requires that 80 percent of program completers pass their licensure exams, but this 80 percent standard, while common among states, sets the bar quite low and is not a meaningful measure of program performance.
The state will publish an annual report that includes the following information: single assessment and aggregate pass rates on licensing tests; survey data from candidates, program completers and district personnel; and aggregate evaluation ratings of program completers.
In Massachusetts, there is some overlap of accreditation and state approval. Members of NCATE/CAEP and the state make up the review team and decisions are made jointly; state members must complete NCATE/CAEP training. Massachusetts delegates its subject-matter program review process to NCATE/CAEP. Programs must align with NCATE/CAEP standards.
603 CMR 7.03 www.ncate.org
Collect data that connect student achievement gains to teacher preparation programs.
As one way to measure whether programs are producing effective classroom teachers, Massachusetts should consider the academic achievement gains of students taught by programs' graduates, averaged over the first three years of teaching. Data that are aggregated to the institution (e.g., combining elementary and secondary programs) rather than disaggregated to the specific preparation program are not useful for accountability purposes. Such aggregation can mask significant differences in performance among programs. Although Massachusetts has outlined its intentions to ensure that preparation programs are held accountable as part of Race to the Top, it is urged to codify these requirements and specify that they apply to alternate route programs as well as to traditional teacher preparation programs.
Gather other meaningful data that reflect program performance.
Although measures of student growth are an important indicator of program effectiveness, they cannot be the sole measure of program quality for several reasons, including the fact that many programs may have graduates whose students do not take standardized tests. The accountability system must therefore include other objective measures that show how well programs are preparing teachers for the classroom. Massachusetts should expand its requirements to also include such measures as:
1. Satisfaction ratings by school principals and teacher supervisors of programs' student teachers, using a standardized form to permit program comparison;
2. Average raw scores of teacher candidates on licensing tests, including academic proficiency, subject-matter and professional knowledge tests;
3. Average number of attempts teacher candidates need to pass licensing tests; and
4. Five-year retention rates of graduates in the teaching profession.
Establish the minimum standard of performance for each category of data.
Merely collecting the types of data described above is insufficient for accountability purposes. The next and perhaps more critical step is for the state to establish precise minimum standards of teacher preparation program performance for each category of data. Massachusetts should set rigorous standards for program performance and hold programs accountable for meeting them, with consequences for those that fail to do so, including loss of program approval.
Maintain full authority over the process for approving teacher preparation programs.
Massachusetts should ensure that it is the state that considers the evidence of program performance and makes the decision about whether programs should continue to be authorized to prepare teachers.
Massachusetts noted that it has recently published new Program Approval Guidelines and Preparation Program Profiles.
States need to hold programs accountable for the quality of their graduates.
The state should examine a number of factors when measuring the performance of teacher preparation programs and deciding whether to approve them. Although the quality of both subject-matter preparation and the professional sequence is crucial, additional measures can provide the state and the public with meaningful, readily understandable indicators of how well programs are preparing teachers to be successful in the classroom.
States have made great strides in building data systems with the capacity to provide evidence of teacher performance. These same data can be used to provide objective evidence of the performance of teacher preparation programs. States should make such data, as well as other objective measures that go beyond licensure pass rates, a central component of their teacher preparation program approval processes, and they should establish precise standards for performance that are more useful for accountability purposes.
Teacher Preparation Program Accountability: Supporting Research
For discussion of teacher preparation program approval, see A. Rotherham and S. Mead, "Back to the Future: The History and Politics of State Teacher Licensure and Certification," in A Qualified Teacher in Every Classroom (Harvard Education Press, 2004).
For evidence of how weak state efforts to hold teacher preparation programs accountable are, see data on programs identified as low-performing in U.S. Department of Education, The Secretary's Seventh Annual Report on Teacher Quality (2010), available at: http://www2.ed.gov/about/reports/annual/teachprep/t2r7.pdf.
For additional discussion and research on how teacher education programs can add value to their teachers, see NCTQ's Teacher Prep Review, available at http://www.nctq.org/p/edschools.
For a discussion of the lack of evidence that national accreditation status enhances teacher preparation programs' effectiveness, see D. Ballou and M. Podgursky, "Teacher Training and Licensure: A Layman's Guide," in Better Teachers, Better Schools, eds. Marci Kanstoroom and Chester E. Finn, Jr. (Washington, D.C.: Thomas B. Fordham Foundation, 1999), pp. 45-47. See also No Common Denominator: The Preparation of Elementary Teachers in Mathematics by America's Education Schools (NCTQ, 2008) and What Education Schools Aren't Teaching About Reading and What Elementary Teachers Aren't Learning (NCTQ, 2006).
See NCTQ, Alternative Certification Isn't Alternative (2007) regarding the dearth of accountability data states require of alternate route programs.