Measures of Student Growth: Arizona

Teacher and Principal Evaluation Policy

Note

The data and analysis on this page are from 2019. View and download the most recent policy data and analysis on Measures of Student Growth in Arizona from the State of the States 2022: Teacher and Principal Evaluation Policies report.

Goal

The state should require objective measures of student growth to be included in a teacher's evaluation score. This goal was reorganized for 2019.

Meets goal

Suggested Citation:
National Council on Teacher Quality. (2019). Measures of Student Growth: Arizona results. State Teacher Policy Database. [Data set].
Retrieved from: https://www.nctq.org/yearbook/state/AZ-Measures-of-Student-Growth-95

Analysis of Arizona's policies

Impact of Student Growth: For all teachers in Arizona, student growth must count for between 33 and 50 percent of the overall evaluation rating. Arizona considers its teachers in two groups, Group A and Group B, for the purposes of evaluation.

For Group A teachers (those with available classroom-level student growth data that are reliable, aligned with the state's academic standards, and appropriate to content areas), classroom data must account for between 33 and 50 percent of a teacher's overall rating. If available and appropriate, data from statewide assessments must be used as at least one of the classroom-level data elements. School-level data are optional, but if used, these data cannot account for more than 17 percent of a teacher's overall rating, and combined classroom and school-level data cannot total more than 50 percent. The total measure of academic progress must include a calculation of student growth, which must comprise at least 20 percent of a teacher's overall rating. State assessment data must be a significant factor in the student growth calculation.

For Group B teachers (those who have limited or no valid and reliable classroom-level student growth data), any limited data that do exist must be incorporated but augmented with school-level data so that the sum is between 33 and 50 percent of a teacher's overall rating. If no classroom-level data exist, then school-level data must account for at least 33 percent and may not exceed 50 percent of a teacher's overall rating.
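To make the arithmetic of these weighting rules concrete, the sketch below checks a proposed set of weights against the Group A and Group B limits described above. It is illustrative only: the function names, inputs, and example values are assumptions made for this page, not part of Arizona statute or of NCTQ's grading methodology.

    def check_group_a_weights(classroom_pct, school_pct, growth_pct):
        """Check Group A weights, each expressed as a percent of the overall rating."""
        problems = []
        if not 33 <= classroom_pct <= 50:
            problems.append("classroom-level data must be 33-50 percent of the overall rating")
        if school_pct > 17:
            problems.append("school-level data may not exceed 17 percent of the overall rating")
        if classroom_pct + school_pct > 50:
            problems.append("classroom plus school-level data may not exceed 50 percent")
        if growth_pct < 20:
            problems.append("the student growth calculation must be at least 20 percent")
        return problems

    def check_group_b_weights(classroom_pct, school_pct):
        """Check Group B weights: limited classroom data (possibly zero) plus
        school-level data must together fall between 33 and 50 percent."""
        if not 33 <= classroom_pct + school_pct <= 50:
            return ["combined classroom and school-level data must be 33-50 percent"]
        return []

    # Example: 35 percent classroom data, 10 percent school data, and a student
    # growth calculation worth 25 percent satisfy every Group A constraint.
    print(check_group_a_weights(35, 10, 25))   # -> []
    print(check_group_b_weights(0, 40))        # -> []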

State's Role in Evaluation System: Arizona districts develop evaluation systems based on criteria provided by the state.

Recommendations for Arizona

Due to Arizona's strong policies in this area, no recommendations are provided.

State response to our analysis

Arizona recognized the factual accuracy of this analysis.

Updated: June 2019

How we graded

7A: Measures of Student Growth 

  • Student Growth: The state should require:
    • That districts use an evaluation instrument that includes objective measures of student growth.

The full goal score is earned based on the following:

  • Full credit: The state will earn full credit if it requires teacher evaluations to include objective measures of student growth. 

Research rationale

Many factors should be considered in formally evaluating a teacher; however, nothing is more important than effectiveness in the classroom. Value-added models are an important tool for measuring student achievement and school effectiveness.[1] These models have the ability to measure individual students' learning gains, controlling for students' previous knowledge and background characteristics. While some research suggests value-added models are subject to bias and statistical limitations,[2] rich data and strong controls can eliminate error and bias.[3] In the area of teacher quality, examining student growth offers a fairer and potentially more meaningful way to evaluate a teacher's effectiveness than other methods schools use.
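As a rough illustration of the value-added idea described above (and not a description of any state's actual model), a teacher's value-added can be estimated as the teacher-level effect in a regression of students' current scores on their prior scores. The toy sketch below uses made-up numbers; every variable and value in it is hypothetical.

    import numpy as np

    # Toy data: each position is one student (prior-year score, current-year
    # score, and an ID for one of two hypothetical teachers).
    prior   = np.array([450., 500., 520., 480., 510., 470.])
    current = np.array([470., 515., 545., 490., 535., 480.])
    teacher = np.array([0, 0, 0, 1, 1, 1])

    # Design matrix: intercept, prior score, and an indicator for teacher 1.
    X = np.column_stack([np.ones_like(prior), prior, (teacher == 1).astype(float)])
    coef, *_ = np.linalg.lstsq(X, current, rcond=None)

    # The coefficient on the teacher indicator is teacher 1's estimated growth
    # effect relative to teacher 0, after controlling for prior achievement.
    print(f"Estimated value-added of teacher 1 vs. teacher 0: {coef[2]:.1f} points")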

Unfortunately, districts have used many evaluation instruments, including some mandated by states, which are structured so that teachers can earn a satisfactory rating without any evidence that they are sufficiently advancing student learning in the classroom.[4] Teacher evaluation instruments should include factors that combine both human judgment and objective measures of student learning.[5]


[1] Hanushek, E. A., & Hoxby, C. M. (2005). Developing value-added measures for teachers and schools. Reforming Education in Arkansas, 99-104; Clotfelter, C., & Ladd, H. F. (1996). Recognizing and rewarding success in public schools. In H. Ladd (Ed.), Holding schools accountable: Performance based reform in education (pp. 23-64). Washington, DC: Brookings Institution Press; Ladd, H. F., & Walsh, R. P. (2002). Implementing value-added measures of school effectiveness: Getting the incentives right. Economics of Education Review, 21(1), 1-17; Meyer, R. H. (1996). Value-added indicators of school performance. In E. A. Hanushek (Ed.), Improving America's schools: The role of incentives (pp. 197-223). Washington, DC: National Academy Press; Braun, H. I. (2005). Using student progress to evaluate teachers: A primer on value-added models. Educational Testing Service.
[2] Rothstein, J. (2009). Student sorting and bias in value-added estimation: Selection on observables and unobservables. Education Finance and Policy, 4(4), 537-571; McCaffrey, D. F., Lockwood, J. R., Koretz, D., Louis, T. A., & Hamilton, L. (2004). Models for value-added modeling of teacher effects. Journal of Educational and Behavioral Statistics, 29(1), 67-101; Darling-Hammond, L., Amrein-Beardsley, A., Haertel, E., & Rothstein, J. (2012). Evaluating teacher evaluation. Phi Delta Kappan, 93(6), 8-15; McCaffrey, D. F., Lockwood, J. R., Koretz, D. M., & Hamilton, L. S. (2003). Evaluating value-added models for teacher accountability. Monograph. Santa Monica, CA: RAND Corporation.
[3] Chetty, R., Friedman, J. N., & Rockoff, J. E. (2014). Measuring the impacts of teachers II: Teacher value-added and student outcomes in adulthood. The American Economic Review, 104(9), 2633-2679; Ballou, D., Sanders, W., & Wright, P. (2004). Controlling for student background in value-added assessment of teachers. Journal of Educational and Behavioral Statistics, 29(1), 37-65; Chetty, R., Friedman, J. N., & Rockoff, J. E. (2014). Measuring the impacts of teachers I: Evaluating bias in teacher value-added estimates. The American Economic Review, 104(9), 2593-2632.
[4] Weisberg, D., Sexton, S., Mulhern, J., Keeling, D., Schunck, J., Palcisco, A., & Morgan, K. (2009). The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness. New Teacher Project; Glazerman, S., Loeb, S., Goldhaber, D., Staiger, D., Raudenbush, S., & Whitehurst, G. (2010). Evaluating teachers: The important role of value-added. Washington, DC: Brookings Institution.
[5] Kane, T. J., Taylor, E. S., Tyler, J. H., & Wooten, A. L. (2011). Identifying effective classroom practices using student achievement data. Journal of Human Resources, 46(3), 587-613; Taylor, E. S., & Tyler, J. H. (2012). The effect of evaluation on teacher performance. The American Economic Review, 102(7), 3628-3651.