Evaluation of Effectiveness: Utah

2011 Identifying Effective Teachers Policy

Goal

The state should require instructional effectiveness to be the preponderant criterion of any teacher evaluation.

Meets in part
Suggested Citation:
National Council on Teacher Quality. (2011). Evaluation of Effectiveness: Utah results. State Teacher Policy Database. [Data set].
Retrieved from: https://www.nctq.org/yearbook/state/UT-Evaluation-of-Effectiveness--8

Analysis of Utah's policies

Utah does not require that objective evidence of student learning be the preponderant criterion of its teacher evaluations.

The state's policy requires local districts to conduct teacher evaluations, but it requires only that educator evaluation programs use multiple measures, including self-evaluation, student and parent input, peer observation, supervisor observation, evidence of professional growth, student achievement data and other indicators of instructional improvement.

For teachers participating in Utah's "career ladder" program, an optional program in which teachers can earn additional income for taking on new responsibilities, the state requires that "student progress shall play a significant role in teacher evaluation."

In addition, Utah has recently adopted a new board rule that outlines criteria for teacher evaluation systems, which must incorporate "valid and reliable measuring tools," including observations of instructional quality, evidence of student growth, and parent and student input. 


Recommendations for Utah

Require instructional effectiveness to be the preponderant criterion of any teacher evaluation.
Although Utah is commended for requiring districts to use student achievement data in its teacher evaluations, it falls short by failing to require that evidence of student learning be the most significant criterion. The state should either require a common evaluation instrument in which evidence of student learning is the most significant criterion, or it should specifically require that student learning be the preponderant criterion in local evaluation processes. This can be accomplished by requiring objective evidence to count for at least half of the evaluation score or through other scoring mechanisms, such as a matrix, that ensure that nothing affects the overall score more. Whether the instrument is state or locally developed, a teacher should not be able to receive a satisfactory rating if found ineffective in the classroom.
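The sketch below illustrates one way a district could implement such a scoring mechanism: objective evidence of student learning counts for half of a composite score, and a matrix-style cap prevents a satisfactory rating when that evidence is lacking. The component names, weights, rating labels and cutoffs are hypothetical illustrations, not Utah policy or an NCTQ-prescribed formula.

def overall_rating(student_learning, observation, other):
    """Combine component scores (each on a 0-4 scale) so that objective
    evidence of student learning carries half the weight and an ineffective
    classroom result caps the overall rating (hypothetical weights)."""
    # Objective evidence of student learning counts for at least half of the score.
    composite = 0.5 * student_learning + 0.3 * observation + 0.2 * other

    # Matrix-style safeguard: a teacher found ineffective on student
    # learning cannot receive a satisfactory overall rating.
    if student_learning < 1.0:
        return "ineffective"
    if composite >= 3.5:
        return "highly effective"
    if composite >= 2.5:
        return "effective"
    if composite >= 1.5:
        return "needs improvement"
    return "ineffective"

# Example: strong observation scores cannot offset missing evidence of learning.
print(overall_rating(student_learning=0.8, observation=3.9, other=3.5))  # "ineffective"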

Ensure that classroom observations specifically focus on and document the effectiveness of instruction.
Although Utah commendably requires classroom observations as part of teacher evaluations, the state should articulate guidelines that focus classroom observations on the quality of instruction, as measured by student time on task, student grasp or mastery of the lesson objective and efficient use of class time.

Utilize rating categories that meaningfully differentiate among various levels of teacher performance.
To ensure that the evaluation instrument accurately differentiates among levels of teacher performance, Utah should require districts to utilize multiple rating categories, such as highly effective, effective, needs improvement and ineffective. A binary system that merely categorizes teachers as satisfactory or unsatisfactory is inadequate.

State response to our analysis

Utah was helpful in providing NCTQ with facts that enhanced this analysis. The state added that, by spring 2012, it will determine the percentage weights of these elements to provide a consistent measure. "A statewide evaluation framework with these elements, timelines and other processes must drive all evaluation systems." Utah also noted that a statewide model evaluation system for educators is being developed for adoption by districts during the 2012-2013 school year.

How we graded

Teachers should be judged primarily by their impact on students. 

While many factors should be considered in formally evaluating a teacher, nothing is more important than effectiveness in the classroom. Unfortunately, many of the evaluation instruments districts use, including some mandated by states, are structured so that teachers can earn a satisfactory rating without any evidence that they are sufficiently advancing student learning in the classroom. It is often enough that teachers appear to be trying, not that they are necessarily succeeding.

Many evaluation instruments give as much weight, or more, to factors that lack any direct correlation with student performance—for example, taking professional development courses, assuming extra duties such as sponsoring a club or mentoring, and getting along well with colleagues. Some instruments shy away from holding teachers accountable for student progress at all. Teacher evaluation instruments should combine both human judgment and objective measures of student learning.

A teacher evaluation instrument that focuses on student learning could include the following components:

A. Observation
  1. Ratings should be based on multiple observations, ideally by multiple persons, within the same year to produce a more accurate rating than is possible with a single observation. Teacher observers should be trained to use a valid and reliable observation protocol (meaning that it has been tested to ensure that the results are trustworthy and useful). The observers should assign degrees of proficiency to observed behaviors.
  2. The primary observation component should be the quality of instruction, as measured by student time on task, student grasp or mastery of the lesson objective and efficient use of class time.
  3. Other factors often considered in the course of an observation can provide useful information, including:
  • Questioning techniques and other methods for engaging class
  • Differentiation of instruction
  • Continual student checks for understanding throughout lesson
  • Appropriate lesson structure and pacing
  • Appropriate grouping structures
  • Reinforcement of student effort 
  • Classroom management and use of effective classroom routines
Other elements commonly found on many instruments, such as "makes appropriate and effective use of technology" and "ties lesson into previous and future learning experiences," may seem important but can be difficult to document reliably in an observation. Having too many elements can distract the observer from the central question: "Are students learning?"

B. Objective Measures of Student Learning

Apart from the observation, the evaluation instrument should draw on objective evidence of student learning. Many districts use portfolios, which create a great deal of work for the teacher and may be unreliable indicators of effectiveness. Good and less cumbersome alternatives to the standard portfolio exist, for example:
  • The value that a teacher adds, as measured by standardized test scores
  • Periodic standardized diagnostic assessments
  • Benchmark assessments that show student growth
  • Artifacts of student work connected to specific student learning standards that are randomly selected for review by the principal or senior faculty and scored using rubrics and descriptors
  • Examples of typical assignments, assessed for their quality and rigor
  • Periodic checks on progress with the curriculum (e.g., progress through the textbook) coupled with evidence of student mastery of the curriculum from quizzes, tests and exams

Research rationale

Reports strongly suggest that most current teacher evaluations are largely a meaningless process, failing to identify the strongest and weakest teachers. The New Teacher Project's report, "Teacher Hiring, Assignment and Transfer in Chicago Public Schools (CPS)" (July 2007), at: http://www.tntp.org/files/TNTPAnalysis-Chicago.pdf, found that the CPS teacher performance evaluation system at that time did not distinguish strong performers and was ineffective at identifying poor performers and dismissing them from Chicago schools. See also Brian Jacob and Lars Lefgren, "When Principals Rate Teachers," Education Next (Spring 2006). Similar findings were reported for a larger sample in The New Teacher Project's The Widget Effect (2009) at: http://widgeteffect.org/. See also MET Project (2010). Learning about teaching: Initial findings from the Measures of Effective Teaching project. Seattle, WA: Bill & Melinda Gates Foundation.

A Pacific Research Institute study found that in California, between 1990 and 1999, only 227 teacher dismissal cases reached the final phase of termination hearings. The authors write: "If all these cases occurred in one year, it would represent one-tenth of 1 percent of tenured teachers in the state. Yet, this number was spread out over an entire decade." In Los Angeles alone, over the same time period, only one teacher went through the dismissal process from start to finish. See Pamela A. Riley, et al., "Contract for Failure," Pacific Research Institute (2002).
That the vast majority of districts find no teachers deserving of an unsatisfactory rating is at odds with what we know of most professions, which routinely include individuals who are not well suited to the job. Nor do these teacher ratings appear to correlate with school performance, suggesting that teacher evaluations are not a meaningful measure of teacher effectiveness. For more information on the reliability of many evaluation systems, particularly the binary systems used by the vast majority of school districts, see S. Loeb et al., "Evaluating Teachers: The Important Role of Value-Added," The Brookings Brown Center Task Group on Teacher Quality (2010).

There is growing evidence that standards-based teacher evaluations that include multiple measures of teacher effectiveness—both objective and subjective measures—correlate with teacher improvement and student achievement. For example, see T. Kane et al., "Evaluating Teacher Effectiveness," Education Next, Vol. 11, No. 3 (2011); E. Taylor and J. Tyler, "The Effect of Evaluation on Performance: Evidence from Longitudinal Student Achievement Data of Mid-Career Teachers," National Bureau of Economic Research (2011); as well as Herbert G. Heneman III, et al., "CPRE Policy Brief: Standards-based Teacher Evaluation as a Foundation for Knowledge- and Skill-based Pay," Consortium for Policy Research in Education (2006).