2015 Identifying Effective Teachers Policy
The state should require instructional effectiveness to be the preponderant criterion of any teacher evaluation.
Ohio no longer requires that objective evidence of student learning be the preponderant criterion of its teacher evaluations. The state now requires student performance data to be a significant factor. Districts develop evaluation policy consistent with the state's framework.
Beginning with the 2015-2016 school year, districts in Ohio may choose between two evaluation frameworks. The first is based 50 percent on teacher performance and 50 percent on student growth.
For teachers who instruct value-added subjects exclusively, teacher-level value-added data count for the full 50 percent. For teachers who instruct value-added courses, but not exclusively, teacher-level value-added data count in proportion to the teacher's schedule (10-50 percent), with LEA measures making up the balance (0-40 percent). For teachers with approved vendor assessment data available at the teacher level, the vendor assessment (10-50 percent) is combined with LEA measures (0-40 percent) for a total of 50 percent. For teachers with neither teacher-level value-added nor approved vendor assessment data available, LEA measures such as student learning objectives count for the full 50 percent. The remaining 50 percent is a teacher-performance rating, which comprises a professional growth plan, observations and walkthroughs.
However, districts may instead choose an alternative framework, in which the student academic growth measure counts for only 35 percent. Teacher performance counts for 50 percent, with the remaining 15 percent consisting of one, or any combination, of the following: student surveys, teacher self-evaluations, peer review evaluations or student portfolios.
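The weightings in the two frameworks amount to simple arithmetic, which the following sketch illustrates. The function name, the 0-100 component scale and the example scores are assumptions made for illustration; they are not prescribed by Ohio's framework.

```python
# Illustrative sketch of the two Ohio framework weightings.
# Component scores on a 0-100 scale are an assumption, not statute.

def composite_score(teacher_performance, student_growth, other_measures=0.0,
                    framework="original"):
    """Combine component scores into an overall score.

    framework="original":    50% teacher performance + 50% student growth.
    framework="alternative": 50% teacher performance + 35% student growth
        + 15% other measures (surveys, self-evaluations, peer review,
        or portfolios).
    """
    if framework == "original":
        wp, wg, wo = 0.50, 0.50, 0.00
    elif framework == "alternative":
        wp, wg, wo = 0.50, 0.35, 0.15
    else:
        raise ValueError("unknown framework")
    return wp * teacher_performance + wg * student_growth + wo * other_measures

# The same teacher-performance and growth scores combine differently
# under the two frameworks once the 15 percent "other measures" enter.
print(round(composite_score(80, 60), 2))                     # 70.0
print(round(composite_score(80, 60, 90, "alternative"), 2))  # 74.5
```

Note that under the alternative framework a weak student growth score is diluted: it moves the overall rating only about two-thirds as much as under the 50/50 framework.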
Regardless of the framework, the following four-level rating scale must be used: accomplished, skilled, developing and ineffective.
Recent legislation has created a safe harbor for teachers with value-added ratings from state tests. Value-added results will not be used in evaluations until results from the state tests administered in the 2016-2017 school year are incorporated into evaluation ratings in the spring of 2018. In the meantime, districts have three options: 1) enter into a memorandum of understanding allowing continued use of value-added results; 2) use student growth measures other than value-added results; or 3) rely on teacher performance measures to determine the overall rating.
Ohio Revised Code 3319.112, 3319.114; HB 64 (2015)
Require instructional effectiveness to be the preponderant criterion of any teacher evaluation.
Ohio's evaluation system now falls short by failing to require that evidence of student learning be the most significant criterion. The state should either require a common evaluation instrument in which evidence of student learning is the most significant criterion, or it should specifically require that student learning be the preponderant criterion in local evaluation processes. This can be accomplished by requiring objective evidence to count for at least half of the evaluation score or through other scoring mechanisms, such as a matrix, that ensure that nothing affects the overall score more. Whether state or locally developed, a teacher should not be able to receive an effective rating if found ineffective in the classroom.
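A scoring matrix of the kind described above can be sketched as a simple lookup: the overall rating is read from the intersection of the student-learning rating and the professional-practice rating, with the cells arranged so that nothing outweighs student learning. The labels reuse Ohio's four rating levels, but every cell value here is an illustrative assumption, not an actual state instrument.

```python
# Hypothetical rating matrix: overall rating is looked up from the two
# component ratings. Cell values are illustrative assumptions, chosen so
# that a teacher found ineffective on student learning cannot receive an
# effective overall rating.

LEVELS = ["ineffective", "developing", "skilled", "accomplished"]

# Rows: student-learning rating; columns: professional-practice rating.
MATRIX = {
    "ineffective":  ["ineffective", "ineffective", "developing",   "developing"],
    "developing":   ["ineffective", "developing",  "developing",   "skilled"],
    "skilled":      ["developing",  "developing",  "skilled",      "accomplished"],
    "accomplished": ["developing",  "skilled",     "accomplished", "accomplished"],
}

def overall_rating(student_learning, practice):
    """Look up the overall rating from the two component ratings."""
    return MATRIX[student_learning][LEVELS.index(practice)]

# Even the highest practice rating cannot lift a teacher whose students
# are not learning past "developing".
print(overall_rating("ineffective", "accomplished"))  # developing
```

Because the lookup is asymmetric, the matrix guarantees the policy goal directly, without any weighting formula: no combination of non-learning factors can outvote evidence of student learning.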
Ohio was helpful in providing NCTQ with the facts necessary for this analysis.
Value-added analysis connects student data to teacher data to measure achievement and performance.
Value-added models are an important tool for measuring student achievement and school effectiveness. These models measure individual students' learning gains, controlling for students' previous knowledge. They can also control for students' background characteristics. In the area of teacher quality, value-added models offer a fairer and potentially more meaningful way to evaluate a teacher's effectiveness than other methods schools use.
For example, at one time a school might have known only that its fifth-grade teacher, Mrs. Jones, consistently had students who did not score at grade level on standardized assessments of reading. With value-added analysis, the school can learn that Mrs. Jones' students were reading at a third-grade level when they entered her class and were above a fourth-grade performance level at the end of the school year. While not yet reading at grade level, Mrs. Jones' students had made more than a year's progress in her class. Because of value-added data, the school can see that she is an effective teacher.
Teachers should be judged primarily by their impact on students.
While many factors should be considered in formally evaluating a teacher, nothing is more important than effectiveness in the classroom.
Unfortunately, many evaluation instruments used by districts, including some mandated by states, are structured so that teachers can earn a satisfactory rating without any evidence that they are sufficiently advancing student learning. It is often enough that teachers appear to be trying, not that they are succeeding.
Many evaluation instruments give as much weight, or more, to factors that lack any direct correlation with student performance: for example, taking professional development courses, assuming extra duties such as sponsoring a club or mentoring, and getting along well with colleagues. Some instruments stop short of holding teachers accountable for student progress at all. Teacher evaluation instruments should combine human judgment with objective measures of student learning.
Evaluation of Effectiveness: Supporting Research
Reports strongly suggest that most current teacher evaluations are largely a meaningless process, failing to identify the strongest and weakest teachers. The New Teacher Project's report, "Hiring, Assignment, and Transfer in Chicago Public Schools," July 2007, at: http://www.tntp.org/files/TNTPAnalysis-Chicago.pdf, found that the CPS teacher performance evaluation system at that time did not distinguish strong performers and was ineffective at identifying poor performers and dismissing them from Chicago schools. See also Brian Jacob and Lars Lefgren, "When Principals Rate Teachers," Education Next, Volume 6, No. 2, Spring 2006, pp. 59-69. Similar findings were reported for a larger sample in The New Teacher Project's The Widget Effect (2009) at: http://widgeteffect.org/. See also MET Project (2010), Learning about teaching: Initial findings from the measures of effective teaching project. Seattle, WA: Bill & Melinda Gates Foundation.
A Pacific Research Institute study found that in California, between 1990 and 1999, only 227 teacher dismissal cases reached the final phase of termination hearings. The authors write: "If all these cases occurred in one year, it would represent one-tenth of 1 percent of tenured teachers in the state. Yet, this number was spread out over an entire decade." In Los Angeles alone, over the same time period, only one teacher went through the dismissal process from start to finish. See Pamela A. Riley, et al., "Contract for Failure," Pacific Research Institute (2002).
That the vast majority of districts report no teachers deserving of an unsatisfactory rating does not square with what we know of most professions, which routinely include individuals who are not well suited to the job. Nor do these teacher ratings appear to correlate with school performance, suggesting that teacher evaluations are not a meaningful measure of teacher effectiveness. For more information on the reliability of many evaluation systems, particularly the binary systems used by the vast majority of school districts, see S. Glazerman, D. Goldhaber, S. Loeb, S. Raudenbush, D. Staiger, and G. Whitehurst, "Evaluating Teachers: The Important Role of Value-Added," The Brookings Brown Center Task Group on Teacher Quality, 2010.
There is growing evidence suggesting that standards-based teacher evaluations that include multiple measures of teacher effectiveness—both objective and subjective measures—correlate with teacher improvement and student achievement. For example see T. Kane, E. Taylor, J. Tyler, and A. Wooten, "Evaluating Teacher Effectiveness." Education Next, Volume 11, No. 3, Summer 2011, pp.55-60; E. Taylor and J. Tyler, "The Effect of Evaluation on Performance: Evidence from Longitudinal Student Achievement Data of Mid-Career Teachers." NBER Working Paper No. 16877, March 2011; as well as H. Heneman III, A. Milanowski, S. Kimball, and A. Odden, "CPRE Policy Brief: Standards-based Teacher Evaluation as a Foundation for Knowledge- and Skill-based Pay," Consortium for Policy Research, March 2006.