The state should require instructional effectiveness to be the preponderant criterion of any teacher evaluation.
Although Massachusetts requires student performance data to be a factor, it does not require that objective evidence of student learning be the preponderant criterion of its teacher evaluations. The state requires districts either to adopt the model system or to develop one of their own that is consistent with the state's framework.
Massachusetts requires its teacher evaluations to include "multiple measures of student learning, growth and achievement" as one category of evidence in teacher evaluations. The state defines these measures as student progress on classroom assessments that are aligned with the state's Curriculum Frameworks; student progress on learning goals; statewide growth measures, including the MCAS Student Growth Percentile and the Massachusetts English Proficiency Assessment (MEPA); and district-determined measures of student learning across grade or subject. Student feedback is also required.
The summative evaluation includes the evaluator's judgment of the teacher's performance against performance standards and the teacher's attainment of goals set forth in the teacher's plan. Four rating categories must be used: exemplary, proficient, needs improvement and unsatisfactory. To be rated proficient overall, teachers must at least be rated proficient on the "Curriculum, Planning and Assessment" and "Teaching All Students" standards.
In addition to the summative performance rating, an impact rating of high, moderate or low is determined based on at least two state or districtwide measures of student learning: the MCAS Student Growth Percentile and the Massachusetts English Proficiency Assessment (MEPA), when available, as well as additional district-determined measures. The impact rating is kept separate from the summative performance rating.
Classroom observations are required.
Require instructional effectiveness to be the preponderant criterion of any teacher evaluation.
Massachusetts falls short by failing to require that evidence of student learning be the most significant criterion of teacher evaluations. Because the state keeps the impact rating wholly separate from the summative performance rating, it is not clear that student learning is really a factor in that rating at all.
The state should either require a common evaluation instrument in which evidence of student learning is the most significant criterion, or it should specifically require that student learning be the preponderant criterion in local evaluation processes. This can be accomplished by requiring objective evidence to count for at least half of the evaluation score or through other scoring mechanisms, such as a matrix, that ensure that nothing affects the overall score more. Whether the evaluation system is state or locally developed, a teacher should not be able to receive an effective rating if found ineffective in the classroom.
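A matrix mechanism of this kind can be sketched in a few lines. The toy example below is hypothetical (the rating labels follow the state's four categories, but the cap rule and the `overall_rating` function are illustrative, not any actual instrument): the overall rating is capped by the student-learning rating, so a teacher found ineffective in the classroom can never come out effective overall.

```python
# Toy sketch of a matrix scoring rule (hypothetical cap logic, not any
# state's actual instrument): the overall rating can never exceed the
# student-learning rating, so no other factor outweighs student learning.

RATINGS = ["unsatisfactory", "needs improvement", "proficient", "exemplary"]

def overall_rating(practice: str, student_learning: str) -> str:
    """Overall rating is the lower of the practice and student-learning ratings."""
    p = RATINGS.index(practice)
    s = RATINGS.index(student_learning)
    return RATINGS[min(p, s)]

# A teacher rated proficient on practice but unsatisfactory on student
# learning cannot receive a proficient overall rating.
print(overall_rating("proficient", "unsatisfactory"))  # unsatisfactory
print(overall_rating("proficient", "exemplary"))       # proficient
```

Unlike a fixed percentage weight, a cap of this sort guarantees the "preponderant criterion" property directly: no combination of other factors can lift the overall score above the student-learning result.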
Ensure that evaluations also include classroom observations that specifically focus on and document the effectiveness of instruction.
Although Massachusetts requires classroom observations, the state should articulate guidelines that ensure that the observations focus on effectiveness of instruction. The primary component of a classroom observation should be the quality of instruction, as measured by student time on task, student grasp or mastery of the lesson objective, and efficient use of class time.
Massachusetts asserted that its evaluation framework is more nuanced than the comments included in this analysis suggest. The framework consists of two separate, but linked, ratings: the summative performance rating and the student impact rating. Instructional effectiveness is at the center of both ratings.
Massachusetts pointed out that each educator is assigned a summative performance rating at the end of the five-step evaluation cycle. This rating assesses an educator's practice against four statewide Standards of Effective Teaching, as well as an educator's progress toward attainment of her or his student learning and professional practice goals. The evaluator classifies the teacher's “professional practice” into one of four performance levels: exemplary, proficient, needs improvement or unsatisfactory, and uses her or his professional judgment to determine this rating based on multiple categories of evidence related to the four standards and the educator’s progress toward meeting her or his goals. Evidence includes classroom observations and artifacts of instruction; multiple measures of student learning, growth and achievement; and student feedback. Instructional effectiveness plays a significant role in the summative performance rating in two ways. First, multiple measures of student learning, growth and achievement are a required source of evidence. An evaluator will review outcomes from student measures to make judgments about the effectiveness of the educator’s practice related to one or more of the four standards. Such evidence may be from classroom assessments, projects, portfolios, or district or state assessments. Second, evaluators must consider progress toward attainment of the educator’s student learning goal when determining the summative performance rating.
Massachusetts further noted that each educator is assigned a student impact rating, which is separate but complementary to the summative performance rating. This rating is informed by trends (at least two years) and patterns (at least two measures) in student learning, growth and achievement as measured by statewide growth measures (student growth percentiles, or SGPs), where available, and district-determined measures (DDMs). With the student impact rating, the evaluator applies her or his professional judgment and analyzes trends and patterns of student learning, growth and achievement to determine whether the educator’s impact on student learning is high, moderate or low. Each educator is matched with at least two measures each year to generate the data necessary for evaluators to determine student impact ratings. MCAS student growth percentiles must be used where available. Districts identify all other measures locally. Instructional effectiveness is a significant factor in the student impact rating, as the rating is wholly derived from the evaluator’s judgment of student outcomes from multiple measures of learning, growth and achievement.
Massachusetts contended that it eschews weights and algorithms in favor of professional judgment because that provides a more holistic and accurate understanding of educator effectiveness than systems that rely on formulas and “scores.” The state also believes that its framework guarantees, through its reliance on professional judgment, that educators who are not making expected levels of instructional impact will be identified and provided the targeted support necessary for rapid improvement. In cases where an educator is not able to improve quickly, the framework provides him or her with sufficient evidence to recommend dismissal. Massachusetts cautions NCTQ from advocating for evaluation systems that include weights or formulas that result in test scores or other measures of student outcomes comprising 50 percent of an educator’s rating without regard for the professional judgment of evaluators. As we have seen now in several states with such systems, the error introduced into an evaluation system when the underlying measures have significant year-to-year variation (error that is not equally distributed across all educators) can result in misaligned conclusions that are then collapsed into a single score. This misalignment crushes educator morale and shakes confidence in the evaluation process. The Massachusetts framework, with its two ratings, allows evaluators and educators to examine discrepancies in practice and impact, which is impossible in most systems.
Massachusetts added that its framework requires frequent observations to derive evidence of practice related to the standards, which comprise elements of practice that are largely instructional.
In its response, Massachusetts "cautions NCTQ from advocating for evaluation systems that include weights or formulas that result in test scores or other measures of student outcomes comprising 50 percent of an educator’s rating without regard for the professional judgment of evaluators." However, NCTQ does not advocate for a particular weight or formula when it comes to the incorporation of student growth into a teacher's evaluation score. By requiring student growth to be the preponderant criterion in local evaluation processes, however, states are able to ensure that teachers cannot receive an effective overall rating if found ineffective in the classroom. This is achieved by states in a variety of ways, many of which lack a particular weight or formula.
Further, NCTQ is not opposed to a two-rating system. Oklahoma recently passed legislation that requires teachers to receive two evaluation scores: a qualitative one and a quantitative one. Even though an overall summative score is no longer provided, the state maintains the significance of student growth by explicitly addressing the quantitative scores in its requirements regarding personnel decisions. For example, teachers must earn both qualitative and quantitative scores of superior for two of three years to earn tenure, and they are eligible for dismissal if they receive a qualitative or quantitative rating of ineffective for two consecutive years.
Value-added analysis connects student data to teacher data to measure achievement and performance.
Value-added models are an important tool for measuring student achievement and school effectiveness. These models measure individual students' learning gains, controlling for students' previous knowledge. They can also control for students' background characteristics. In the area of teacher quality, value-added models offer a fairer and potentially more meaningful way to evaluate a teacher's effectiveness than other methods schools use.
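At its simplest, the gain computation behind such a model can be sketched as follows. This is a toy illustration with invented numbers and a hypothetical `value_added` function; a real value-added model would also adjust for background characteristics and measurement error.

```python
# Toy sketch of a value-added gain computation (invented numbers, not a
# real model): compare each student's end-of-year score with the
# prior-year score, in grade-level units, and report the average gain
# beyond one expected year of growth.

def value_added(prior, actual, expected_gain=1.0):
    """Mean gain beyond the expected one-year growth, in grade-level units."""
    gains = [end - start for start, end in zip(prior, actual)]
    return sum(gains) / len(gains) - expected_gain

# A class that entered reading around a third-grade level and finished
# above a fourth-grade level made more than a year's progress.
prior = [3.0, 3.1, 2.9, 3.2]
actual = [4.3, 4.4, 4.1, 4.5]
print(value_added(prior, actual) > 0)  # True: more than a year's growth
```

The point of the sketch is that the teacher is credited for growth relative to where students started, not for students' absolute scores.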
For example, at one time a school might have known only that its fifth-grade teacher, Mrs. Jones, consistently had students who did not score at grade level on standardized assessments of reading. With value-added analysis, the school can learn that Mrs. Jones' students were reading on a third-grade level when they entered her class, and that they were above a fourth-grade performance level at the end of the school year. While not yet reaching appropriate grade level, Mrs. Jones' students had made more than a year's progress in her class. Because of value-added data, the school can see that she is an effective teacher.

Teachers should be judged primarily by their impact on students.
While many factors should be considered in formally evaluating a teacher, nothing is more important than effectiveness in the classroom.
Unfortunately, districts have used many evaluation instruments, including some mandated by states, that are structured so that teachers can earn a satisfactory rating without any evidence that they are sufficiently advancing student learning in the classroom. It is often enough that teachers appear to be trying, not that they are necessarily succeeding.
Many evaluation instruments give as much weight, or more, to factors that lack any direct correlation with student performance—for example, taking professional development courses, assuming extra duties such as sponsoring a club or mentoring, and getting along well with colleagues. Some instruments avoid holding teachers accountable for student progress altogether. Teacher evaluation instruments should include factors that combine both human judgment and objective measures of student learning.
Evaluation of Effectiveness: Supporting Research
Reports strongly suggest that most current teacher evaluations are largely a meaningless process, failing to identify the strongest and weakest teachers. The New Teacher Project's report, "Hiring, Assignment, and Transfer in Chicago Public Schools" (July 2007), at http://www.tntp.org/files/TNTPAnalysis-Chicago.pdf, found that the CPS teacher performance evaluation system at that time did not distinguish strong performers and was ineffective at identifying poor performers and dismissing them from Chicago schools. See also Brian Jacob and Lars Lefgren, "When Principals Rate Teachers," Education Next, Volume 6, No. 2, Spring 2006, pp. 59-69. Similar findings were reported for a larger sample in The New Teacher Project's The Widget Effect (2009), at http://widgeteffect.org/. See also MET Project (2010), Learning About Teaching: Initial Findings from the Measures of Effective Teaching Project, Seattle, WA: Bill & Melinda Gates Foundation.
A Pacific Research Institute study found that in California, between 1990 and 1999, only 227 teacher dismissal cases reached the final phase of termination hearings. The authors write: "If all these cases occurred in one year, it would represent one-tenth of 1 percent of tenured teachers in the state. Yet, this number was spread out over an entire decade." In Los Angeles alone, over the same time period, only one teacher went through the dismissal process from start to finish. See Pamela A. Riley, et al., "Contract for Failure," Pacific Research Institute (2002).
The finding that the vast majority of districts have no teachers deserving of an unsatisfactory rating is at odds with what we know of most professions, which routinely include individuals who are not well suited to the job. Nor do these teacher ratings appear to correlate with school performance, suggesting that teacher evaluations are not a meaningful measure of teacher effectiveness. For more information on the reliability of many evaluation systems, particularly the binary systems used by the vast majority of school districts, see S. Glazerman, D. Goldhaber, S. Loeb, S. Raudenbush, D. Staiger, and G. Whitehurst, "Evaluating Teachers: The Important Role of Value-Added," The Brookings Brown Center Task Group on Teacher Quality, 2010.
There is growing evidence suggesting that standards-based teacher evaluations that include multiple measures of teacher effectiveness—both objective and subjective—correlate with teacher improvement and student achievement. For example, see T. Kane, E. Taylor, J. Tyler, and A. Wooten, "Evaluating Teacher Effectiveness," Education Next, Volume 11, No. 3, Summer 2011, pp. 55-60; E. Taylor and J. Tyler, "The Effect of Evaluation on Performance: Evidence from Longitudinal Student Achievement Data of Mid-Career Teachers," NBER Working Paper No. 16877, March 2011; as well as H. Heneman III, A. Milanowski, S. Kimball, and A. Odden, "CPRE Policy Brief: Standards-based Teacher Evaluation as a Foundation for Knowledge- and Skill-based Pay," Consortium for Policy Research in Education, March 2006.