Identifying Effective Teachers Policy
The state should require instructional effectiveness to be the preponderant criterion of any teacher evaluation.
Commendably, Tennessee requires that objective evidence of student learning be the preponderant criterion of its teacher evaluations. The state provides a model, the Tennessee Educator Acceleration Model (TEAM), but districts may develop their own systems consistent with the state framework, subject to state approval.
The state requires that 50 percent of evaluations be based on student achievement data. Recent legislation adjusts the weighting of student growth data in a teacher's evaluation to lessen the impact of the state's new assessments (TNReady) on evaluation scores. By 2017-2018, these new assessments will count for 35 percent of the evaluation score; the remaining 15 percent must be based on other measures of student achievement.
Teachers with TVAAS data who teach grades 4-8 may choose among the following achievement measures: state assessments, schoolwide TVAAS, the ACT/SAT suite of assessments, "off the shelf" assessments, AP/IB/NIC suites of assessments and graduation rates.
Recent legislation also adjusted the weights for those teaching nontested grades and subjects. Currently, student achievement data make up 30 percent of the evaluation, with 15 percent based on growth as represented by TVAAS. The remainder of the score is based on observations.
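As a rough illustration of how such weights combine into a single score, consider the following minimal sketch in Python. The component names, the 1-5 score scale and the sample scores are illustrative assumptions, not the state's actual calculation.

    # Minimal sketch of combining weighted evaluation components into a
    # composite score. Component names and the 1-5 scale are illustrative
    # assumptions, not Tennessee's actual formula.

    def composite_score(scores, weights):
        """Weighted average of component scores, each assumed to be on a 1-5 scale."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100 percent"
        return sum(scores[name] * weight for name, weight in weights.items())

    # Tested grades and subjects: 35% growth (TVAAS), 15% other achievement,
    # 50% qualitative measures such as classroom observations.
    tested = {"growth": 0.35, "achievement": 0.15, "qualitative": 0.50}

    # Nontested grades and subjects: 15% growth, 15% other achievement,
    # 70% qualitative measures.
    nontested = {"growth": 0.15, "achievement": 0.15, "qualitative": 0.70}

    scores = {"growth": 4.0, "achievement": 3.0, "qualitative": 4.5}
    print(round(composite_score(scores, tested), 2))     # 4.1
    print(round(composite_score(scores, nontested), 2))  # 4.2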
For each evaluation, the person being evaluated and the person conducting the evaluation must agree on which measures are employed. If they cannot agree, the person conducting the evaluation chooses the measures.
Teachers must be rated using multiple rating categories: significantly below expectations, below expectations, at expectations, above expectations and significantly above expectations.
Classroom observations are required.
SB 119 (2015); Teacher and Principal Evaluation Policy 5.201, http://tn.gov/sbe/Policies/5.201_TeacherandPrincipalEvaluationPolicy_1-30-2015.pdf
Because Tennessee's evaluation of effectiveness policies are strong, no recommendations are provided.
Tennessee recognized the factual accuracy of this analysis. The state added that educators who teach fine arts, physical education or world languages may use the portfolio model as part of their effectiveness rating. The portfolio constitutes 35 percent of the rating, the evaluation 50 percent and the selected achievement measure the remaining 15 percent, as illustrated in the sketch below.
Tennessee also noted that if there is disagreement on the most appropriate measure for a teacher to use as his or her 15 percent achievement measure, the person being evaluated may make the final selection. The evaluator no longer has final decision-making authority in cases of disagreement.
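Putting the portfolio weights described above into numbers, a minimal sketch, with invented component scores on an assumed 1-5 scale:

    # Illustrative composite for a fine arts, physical education or world
    # language teacher under the portfolio model: 35% portfolio, 50%
    # evaluation, 15% selected achievement measure. Scores are invented.
    portfolio, evaluation, achievement = 4.0, 4.0, 3.0
    composite = 0.35 * portfolio + 0.50 * evaluation + 0.15 * achievement
    print(round(composite, 2))  # 3.85; the three weights total 100 percent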
Value-added analysis connects student achievement data to individual teachers in order to measure both student learning and teacher performance.
Value-added models are an important tool for measuring student achievement and school effectiveness. These models measure individual students' learning gains, controlling for students' previous knowledge. They can also control for students' background characteristics. In the area of teacher quality, value-added models offer a fairer and potentially more meaningful way to evaluate a teacher's effectiveness than other methods schools use.
For example, at one time a school might have known only that its fifth-grade teacher, Mrs. Jones, consistently had students who did not score at grade level on standardized assessments of reading. With value-added analysis, the school can learn that Mrs. Jones' students were reading on a third-grade level when they entered her class, and that they were above a fourth-grade performance level at the end of the school year. While not yet reaching the appropriate grade level, Mrs. Jones' students had made more than a year's progress in her class. Because of value-added data, the school can see that she is an effective teacher.
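The core idea can be sketched in a few lines of code. The following deliberately simplified Python example regresses end-of-year scores on prior-year scores and treats each teacher's average residual as an estimate of value added; the student data and teacher names are invented, and real models such as TVAAS use far more sophisticated statistics (multiple years of data, student covariates, shrinkage estimators).

    # Simplified value-added sketch: regress end-of-year scores on prior-year
    # scores, then average each teacher's residuals. All data are invented.
    import numpy as np

    # (prior_score, end_score, teacher) for a handful of hypothetical students
    records = [
        (3.0, 4.2, "Jones"), (3.1, 4.3, "Jones"), (2.9, 4.1, "Jones"),
        (4.0, 4.6, "Smith"), (4.2, 4.8, "Smith"), (3.8, 4.3, "Smith"),
    ]
    prior = np.array([r[0] for r in records])
    end = np.array([r[1] for r in records])

    # Fit end = a + b * prior by ordinary least squares.
    b, a = np.polyfit(prior, end, 1)
    residuals = end - (a + b * prior)

    # A teacher's average residual is the growth beyond what prior scores
    # predict: the low-scoring students here gain more than expected.
    for teacher in ("Jones", "Smith"):
        mask = np.array([r[2] == teacher for r in records])
        print(teacher, round(float(residuals[mask].mean()), 2))  # ~+0.03, ~-0.03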
Teachers should be judged primarily by their impact on students.
While many factors should be considered in formally evaluating a teacher, nothing is more important than effectiveness in the classroom.
Unfortunately, districts have used many evaluation instruments, including some mandated by states, that are structured so that teachers can earn a satisfactory rating without any evidence that they are sufficiently advancing student learning in the classroom. It is often enough that teachers appear to be trying, not that they are necessarily succeeding.
Many evaluation instruments give as much weight, or more, to factors that lack any direct correlation with student performance, for example taking professional development courses, assuming extra duties such as sponsoring a club or mentoring, and getting along well with colleagues. Some instruments avoid holding teachers accountable for student progress altogether. Teacher evaluation instruments should combine human judgment with objective measures of student learning.
Evaluation of Effectiveness: Supporting Research
Reports strongly suggest that most current teacher evaluations are a largely meaningless process, failing to identify the strongest and weakest teachers. The New Teacher Project's report "Hiring, Assignment, and Transfer in Chicago Public Schools" (July 2007), http://www.tntp.org/files/TNTPAnalysis-Chicago.pdf, found that the CPS teacher performance evaluation system at that time did not distinguish strong performers and was ineffective at identifying poor performers and dismissing them from Chicago schools. See also Brian Jacob and Lars Lefgren, "When Principals Rate Teachers," Education Next, Vol. 6, No. 2, Spring 2006, pp. 59-69. Similar findings were reported for a larger sample in The New Teacher Project's The Widget Effect (2009), http://widgeteffect.org/. See also MET Project, "Learning About Teaching: Initial Findings from the Measures of Effective Teaching Project," Bill & Melinda Gates Foundation, Seattle, WA, 2010.
A Pacific Research Institute study found that in California, between 1990 and 1999, only 227 teacher dismissal cases reached the final phase of termination hearings. The authors write: "If all these cases occurred in one year, it would represent one-tenth of 1 percent of tenured teachers in the state. Yet, this number was spread out over an entire decade." In Los Angeles alone, over the same time period, only one teacher went through the dismissal process from start to finish. See Pamela A. Riley et al., "Contract for Failure," Pacific Research Institute, 2002.
The finding that the vast majority of districts have no teachers deserving of an unsatisfactory rating is at odds with what we know of most professions, which routinely include individuals who are not well suited to the job. Nor do these teacher ratings correlate with school performance, suggesting that teacher evaluations are not a meaningful measure of teacher effectiveness. For more information on the reliability of many evaluation systems, particularly the binary systems used by the vast majority of school districts, see S. Glazerman, D. Goldhaber, S. Loeb, S. Raudenbush, D. Staiger and G. Whitehurst, "Evaluating Teachers: The Important Role of Value-Added," Brookings Brown Center Task Group on Teacher Quality, 2010.
There is growing evidence that standards-based teacher evaluations incorporating multiple measures of teacher effectiveness, both objective and subjective, correlate with teacher improvement and student achievement. See, for example, T. Kane, E. Taylor, J. Tyler and A. Wooten, "Evaluating Teacher Effectiveness," Education Next, Vol. 11, No. 3, Summer 2011, pp. 55-60; E. Taylor and J. Tyler, "The Effect of Evaluation on Performance: Evidence from Longitudinal Student Achievement Data of Mid-Career Teachers," NBER Working Paper No. 16877, March 2011; and H. Heneman III, A. Milanowski, S. Kimball and A. Odden, "CPRE Policy Brief: Standards-Based Teacher Evaluation as a Foundation for Knowledge- and Skill-Based Pay," Consortium for Policy Research in Education, March 2006.