Delivering Well-Prepared Teachers Policy
The state's approval process for teacher preparation programs should hold programs accountable for the quality of the teachers they produce.
Although Ohio does more than most states to hold programs accountable for the quality of the teachers they produce, the state's approval process for its traditional and alternate route teacher preparation programs leaves room for improvement.
Ohio collects some objective data that reflect program performance, including value-added data on the achievement gains of program graduates' students. The state reports these data for institutions of higher education on its website, giving the public meaningful, readily understandable indicators of how well programs are doing. These data are disaggregated by certification area, but not for alternate route programs. The state also collects the number of completers who, while teaching, have earned each evaluation rating, aggregated by program and year of graduation.
However, Ohio has not established minimum performance standards for each category of data it collects that could be used for accountability purposes. Further, in the past three years, just one program in the state has been identified as low performing, an additional indicator that programs lack accountability.
Report cards of teacher preparation programs include the following: licensure test scores, value-added data (EVAAS), candidate academic measures, field/clinical experiences, preservice teacher candidate survey results, national accreditation, resident educator persistence data, teacher alumni survey data, and excellence and innovation initiatives.
In Ohio, the state maintains full authority over teacher preparation program approval.
Ohio Administrative Code 3301-24-03
Ohio Revised Code 3319.111
Educator Accountability: https://www.ohiohighered.org/educator-accountability
Title II State Reports: https://title2.ed.gov
Establish the minimum standard of performance for each category of data.
Ohio should establish precise minimum performance standards for each category of data it collects on teacher preparation programs. Programs should then be held accountable for meeting these standards, with consequences for failing to do so, including loss of program approval.
Ohio was helpful in providing NCTQ with facts that enhanced this analysis.
The state also asserted that the statement about identifying one Educator Preparation Program (EPP) as low performing is misleading, as there is no summative rating at this time for Ohio's EPP Performance Reports. The "low performing" status of one Ohio EPP is in strict compliance with Title II USDOE requirements for designating educator preparation programs as "Effective," "At Risk of Low Performing," or "Low Performing" based solely on licensure pass rates over a period of time. If the proposed USDOE requirements for rating EPPs move forward as drafted, the Chancellor of the Ohio Board of Regents will develop a summative rating calculated on a formula inclusive of all the metrics reported in the annual Ohio EPP Performance Reports.
The state also commented that it has not established baseline performance levels for each metric outlined in the Performance Report; the metric results inform Ohio's cyclic review of programs for purposes of continuing approval.
States need to hold programs accountable for the quality of their graduates.
The state should examine a number of factors when measuring the performance of teacher preparation programs and approving them. Although the quality of both the subject-matter preparation and the professional sequence is crucial, additional measures can also provide the state and the public with meaningful, readily understandable indicators of how well programs are preparing teachers to be successful in the classroom.
States have made great strides in building data systems with the capacity to provide evidence of teacher performance. These same data can be used to provide objective evidence of the performance of teacher preparation programs. States should make such data, as well as other objective measures that go beyond licensure pass rates, a central component of their teacher preparation program approval processes, and they should establish precise standards for performance that are more useful for accountability purposes.
Teacher Preparation Program Accountability: Supporting Research
For discussion of teacher preparation program approval, see Andrew Rotherham and S. Mead's chapter "Back to the Future: The History and Politics of State Teacher Licensure and Certification" in A Qualified Teacher in Every Classroom (Harvard Education Press, 2004).
For evidence of the weakness of state efforts to hold teacher preparation programs accountable, see data on programs identified as low performing in the U.S. Department of Education, The Secretary's Seventh Annual Report on Teacher Quality (2010), available at: http://www2.ed.gov/about/reports/annual/teachprep/t2r7.pdf.
For additional discussion and research on how teacher education programs can add value to their teachers, see NCTQ's Teacher Prep Review, available at http://www.nctq.org/p/edschools.
For a discussion of the lack of evidence that national accreditation status enhances teacher preparation programs' effectiveness, see D. Ballou and M. Podgursky, "Teacher Training and Licensure: A Layman's Guide," in Better Teachers, Better Schools, eds. Marci Kanstoroom and Chester E. Finn, Jr. (Washington, D.C.: Thomas B. Fordham Foundation, 1999), pp. 45-47. See also No Common Denominator: The Preparation of Elementary Teachers in Mathematics by America's Education Schools (NCTQ, 2008) and What Education Schools Aren't Teaching About Reading and What Elementary Teachers Aren't Learning (NCTQ, 2006).
See NCTQ's Alternative Certification Isn't Alternative (2007) regarding the dearth of accountability data states require of alternate route programs.