2015 General Teacher Prep Programs Policy
The state's approval process for teacher preparation programs should hold programs accountable for the quality of the teachers they produce.
Maryland's approval process for traditional and alternate route teacher preparation programs does not hold programs accountable for the quality of the teachers they produce.
Most importantly, Maryland does not collect or report data that connect student achievement gains to teacher preparation programs. The state does collect some objective data on the performance of its alternate route programs: it requires Maryland Approved Alternative Preparation Programs (MAAP) to submit an annual data report that includes principal satisfaction ratings (a rating of 90 percent or higher is deemed as good as or better than other first-year teachers); participants' satisfaction with the training and support received in the program, including their preparedness to teach upon completion; and data from intern supervisors and residency mentors. Maryland also requires programs to move up in their level of program development according to the MAAP Guidelines, although the state does not specify any consequences for programs that fail to progress.
However, the state does not collect these data for its traditional teacher preparation programs; for those programs it collects only annual summary licensure test pass rates (80 percent of program completers must pass their licensure exams). The 80 percent pass-rate standard, though common among states, sets the bar quite low and is not a meaningful measure of program performance.
Further, in the past three years, no programs in the state have been identified as low performing—an additional indicator that programs lack accountability.
The state's website does not include a report card that allows the public to review and compare program performance.
Maryland requires institutions with 2,000 or more full-time equivalent students to receive and maintain national accreditation through CAEP in conjunction with state program approval.
MD Institutional Performance Criteria
http://www.marylandpublicschools.org/MSDE/divisions/certification/progapproval/docs/InstitutionalPerformanceCriteria_09032014.pdf
http://www.marylandpublicschools.org/MSDE/divisions/certification/progapproval/Program_Approval_Section.htm
http://marylandpublicschools.org/MSDE/divisions/certification/progapproval/maapp.htm
Title II State Reports
https://title2.ed.gov
www.caepnet.org
Collect data that connect student achievement gains to teacher preparation programs.
As one way to measure whether programs are producing effective classroom teachers, Maryland should consider the academic achievement gains of students taught by programs' graduates, averaged over the first three years of teaching. Data that are aggregated to the institution (e.g., combining elementary and secondary programs) rather than disaggregated to the specific preparation program are not useful for accountability purposes. Such aggregation can mask significant differences in performance among programs. The state is urged to codify these requirements and to specify that they apply to alternate route programs as well as to traditional teacher preparation programs.
Gather other meaningful data that reflect program performance.
Although measures of student growth are an important indicator of program effectiveness, they cannot be the sole measure of program quality for several reasons, including the fact that many programs may have graduates whose students do not take standardized tests. The accountability system must therefore include other objective measures that show how well all programs are preparing teachers for the classroom, such as:
1. Evaluation results from the first and/or second year of teaching
2. Satisfaction ratings by school principals and teacher supervisors of programs' student teachers, using a standardized form to permit program comparison
3. Average raw scores of teacher candidates on licensing tests, including academic proficiency, subject matter and professional knowledge tests
4. Average number of attempts teacher candidates take to pass licensing tests
5. Five-year retention rates of graduates in the teaching profession.
Establish the minimum standard of performance for each category of data.
Merely collecting the types of data described above is insufficient for accountability purposes. The next and perhaps more critical step is for the state to establish precise minimum standards for teacher preparation program performance for each category of data. Maryland should be mindful of setting rigorous standards for program performance, as its current requirement that 80 percent of program graduates pass the state's licensing tests is too low a bar. Programs should be held accountable for meeting rigorous standards, and there should be consequences for failing to do so, including loss of program approval.
Publish an annual report card on the state's website.
For the sake of public transparency, Maryland should codify policy requiring the state to publish an annual report card on its website that presents, at the program level, all data the state collects on individual teacher preparation programs. Data should be presented in a manner that clearly conveys whether programs have met performance standards.
Maintain full authority over the process for approving teacher preparation programs.
Maryland should not cede its authority and must ensure that it is the state that considers the evidence of program performance, no matter the program size, and makes the decision about whether programs should continue to be authorized to prepare teachers.
Maryland responded that it respectfully disagrees with the claim that, because no IHEs have been determined to be low performing, accountability measures must be weak; in the state's view, that reasoning would demand that at least a few students in every PreK-12 classroom fail just to maintain a bell curve or to demonstrate rigor. With regard to collecting data linking teacher performance to the origin of preparation, the state agreed that such data should be significant in determining program effectiveness. However, Maryland contends that with only one year of data available, the sample is much too small to discount the many variables that emerge with new evaluation systems, such as cultural and social issues, teacher placement, and varying supports, and that it is therefore too soon to begin this aspect of EPP evaluation.
In summary, Maryland indicated that it does collect these data but that it has not decided, as an education community of practice, how they will be used once two or three years of data are available. Issues of privacy and legal authority persist in collecting and maintaining scores or evaluations of any kind that can be attributed to individuals in disaggregated form.
Nothing in NCTQ's recommendations suggests the need for a bell curve, but the lack of a visible accountability system in Maryland, coupled with the fact that the state has not identified any programs as low performing, raises an alarm that the state is not evaluating the quality of programs that train its prospective teachers. These programs are responsible to their teacher candidates, the districts being served and, ultimately, the students in the state for the level of preparedness of teacher graduates. While we know there is much room for improvement in the quality of teacher preparation programs, the measures of accountability in Maryland and in most states do not reflect this reality.
NCTQ looks forward to reviewing the state's progress in its use of student achievement in teacher preparation program accountability in future editions of the Yearbook.
States need to hold programs accountable for the quality of their graduates.
The state should examine a number of factors when measuring the performance of and approving teacher preparation programs. Although the quality of both the subject-matter preparation and professional sequence is crucial, there are also additional measures that can provide the state and the public with meaningful, readily understandable indicators of how well programs are doing when it comes to preparing teachers to be successful in the classroom.
States have made great strides in building data systems with the capacity to provide evidence of teacher performance. These same data can be used to provide objective evidence of the performance of teacher preparation programs. States should make such data, as well as other objective measures that go beyond licensure pass rates, a central component of their teacher preparation program approval processes, and they should establish precise standards for performance that are more useful for accountability purposes.
Teacher Preparation Program Accountability: Supporting Research
For a discussion of teacher preparation program approval, see A. Rotherham and S. Mead's chapter, "Back to the Future: The History and Politics of State Teacher Licensure and Certification," in A Qualified Teacher in Every Classroom (Harvard Education Press, 2004).
For evidence of the weakness of state efforts to hold teacher preparation programs accountable, see data on programs identified as low performing in the U.S. Department of Education, The Secretary's Seventh Annual Report on Teacher Quality (2010), at: http://www2.ed.gov/about/reports/annual/teachprep/t2r7.pdf.
For additional discussion and research on how teacher education programs can add value to their teachers, see NCTQ's Teacher Prep Review, available at http://www.nctq.org/p/edschools.
For a discussion of the lack of evidence that national accreditation status enhances teacher preparation programs' effectiveness, see D. Ballou and M. Podgursky, "Teacher Training and Licensure: A Layman's Guide," in Better Teachers, Better Schools, eds. Marci Kanstoroom and Chester E. Finn, Jr. (Washington, D.C.: Thomas B. Fordham Foundation, 1999), pp. 45-47. See also No Common Denominator: The Preparation of Elementary Teachers in Mathematics by America's Education Schools (NCTQ, 2008) and What Education Schools Aren't Teaching About Reading and What Elementary Teachers Aren't Learning (NCTQ, 2006).
See NCTQ's Alternative Certification Isn't Alternative (2007) regarding the dearth of accountability data states require of alternate route programs.