NC: Student growth is tracked by the state but used only to drive professional development, and for school, district, and state reporting.
No. Districts design their own evaluation systems based on specific criteria from the state: AK, AR, AZ, CA, CO, CT, DC, FL, IA, ID, IL, IN, KY, MA, MD, ME, MI, MN, MT, ND, NE, NH, NJ, NM, NV, NY, OR, SD, UT, VA, VT, WY
Updated: December 2017
How we graded
7A: Measures of Student Growth
- Student Growth: The state should require:
- That districts use an evaluation instrument that includes objective student growth measures.
- That the evaluation instruments used by districts be structured so that a teacher who is not rated at least effective on measures of student growth cannot earn an overall rating of effective.
Credit toward the goal score is awarded as follows:
- Full credit: The state will earn full credit if it requires teachers to achieve a student growth rating of at least effective in order to receive a summative rating of effective.
- Three-quarters credit: The state will earn three-quarters of a point if it requires teachers to earn a student growth rating that is greater than ineffective in order to earn a summative rating of effective.
- One-half credit: The state will earn one-half of a point if it requires objective measures of student growth to count for at least 33 percent of the summative score, but it does not require teachers to meet their student growth goals in order to be rated overall effective.
- One-quarter credit: The state will earn one-quarter of a point if its evaluation instrument requires objective measures of student growth to count for less than 33 percent of the summative score, but it does not require teachers to meet their student growth goals in order to be rated overall effective.
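The four credit tiers above form a simple decision rule. The sketch below is a hypothetical encoding of that rubric, not NCTQ's actual scoring code; the field names (`growth_rating_required`, `growth_weight_pct`) and the assumption that a state with no growth requirement at all earns no credit are illustrative additions, not stated in the rubric.

```python
def goal_7a_credit(growth_rating_required: str, growth_weight_pct: float) -> float:
    """Hypothetical scoring of Goal 7A from a state's evaluation policy.

    growth_rating_required: the minimum student-growth rating a teacher must
        earn to be rated overall effective ("effective", "above_ineffective",
        or "none" if the state sets no rating floor).
    growth_weight_pct: the required weight of objective growth measures in the
        summative score, in percent (used only when there is no rating floor).
    """
    if growth_rating_required == "effective":
        return 1.0   # full credit: growth rating of at least effective required
    if growth_rating_required == "above_ineffective":
        return 0.75  # three-quarters credit: growth rating above ineffective required
    # No rating floor: credit depends on the required weight of growth measures.
    if growth_weight_pct >= 33:
        return 0.5   # one-half credit: growth counts for at least 33 percent
    if growth_weight_pct > 0:
        return 0.25  # one-quarter credit: growth counts for less than 33 percent
    return 0.0       # assumed: no growth requirement earns no credit (not stated above)
```

For example, a state requiring a growth rating of at least effective would score 1.0 regardless of the weight it assigns to growth measures.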
Many factors should be considered in formally evaluating a teacher; however, nothing is more important than effectiveness in the classroom. Value-added models are an important tool for measuring student achievement and school effectiveness. These models can estimate individual students' learning gains while controlling for students' previous knowledge and background characteristics. While some research suggests value-added models are subject to bias and statistical limitations, rich data and strong controls can substantially reduce error and bias. In the area of teacher quality, examining student growth offers a fairer and potentially more meaningful way to evaluate a teacher's effectiveness than other methods schools use.
Unfortunately, many of the evaluation instruments districts have used, including some mandated by states, are structured so that teachers can earn a satisfactory rating without any evidence that they are sufficiently advancing student learning in the classroom. Teacher evaluation instruments should combine human judgment with objective measures of student learning.
Hanushek, E. A., & Hoxby, C. M. (2005). Developing value-added measures for teachers and schools. Reforming Education in Arkansas, 99-104.; Clotfelter, C., & Ladd, H. F. (1996). Recognizing and rewarding success in public schools. In H. Ladd (Ed.), Holding schools accountable: Performance based reform in education (pp. 23-64). Washington, DC: Brookings Institution Press.; Ladd, H. F., & Walsh, R. P. (2002). Implementing value-added measures of school effectiveness: Getting the incentives right. Economics of Education Review, 21(1), 1-17.; Meyer, R. H. (1996). Value-added indicators of school performance. In E. A. Hanushek (Ed.), Improving America's schools: The role of incentives (pp. 197-223). Washington, DC: National Academy Press.; Braun, H. I. (2005). Using student progress to evaluate teachers: A primer on value-added models. Educational Testing Service.
Rothstein, J. (2009). Student sorting and bias in value-added estimation: Selection on observables and unobservables. Education Finance and Policy, 4(4), 537-571.; McCaffrey, D. F., Lockwood, J. R., Koretz, D., Louis, T. A., & Hamilton, L. (2004). Models for value-added modeling of teacher effects. Journal of Educational and Behavioral Statistics, 29(1), 67-101.; Darling-Hammond, L., Amrein-Beardsley, A., Haertel, E., & Rothstein, J. (2012). Evaluating teacher evaluation. Phi Delta Kappan, 93(6), 8-15.; McCaffrey, D. F., Lockwood, J. R., Koretz, D. M., & Hamilton, L. S. (2003). Evaluating value-added models for teacher accountability. Monograph. Santa Monica, CA: RAND Corporation.
Chetty, R., Friedman, J. N., & Rockoff, J. E. (2014). Measuring the impacts of teachers II: Teacher value-added and student outcomes in adulthood. The American Economic Review, 104(9), 2633-2679.; Ballou, D., Sanders, W., & Wright, P. (2004). Controlling for student background in value-added assessment of teachers. Journal of Educational and Behavioral Statistics, 29(1), 37-65.; Chetty, R., Friedman, J. N., & Rockoff, J. E. (2014). Measuring the impacts of teachers I: Evaluating bias in teacher value-added estimates. The American Economic Review, 104(9), 2593-2632.
Weisberg, D., Sexton, S., Mulhern, J., Keeling, D., Schunck, J., Palcisco, A., & Morgan, K. (2009). The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness. The New Teacher Project.; Glazerman, S., Loeb, S., Goldhaber, D., Staiger, D., Raudenbush, S., & Whitehurst, G. (2010). Evaluating teachers: The important role of value-added. Washington, DC: Brookings Institution.
Kane, T. J., Taylor, E. S., Tyler, J. H., & Wooten, A. L. (2011). Identifying effective classroom practices using student achievement data. Journal of Human Resources, 46(3), 587-613.; Taylor, E. S., & Tyler, J. H. (2012). The effect of evaluation on teacher performance. The American Economic Review, 102(7), 3628-3651.