Identifying Effective Teachers Policy
The state should have a data system that contributes some of the evidence needed to assess teacher effectiveness.
Use of Wisconsin's longitudinal data system to provide evidence of teacher effectiveness is mandated or otherwise required in state policy.
Wisconsin's evaluation policy defines a teacher as an employee whose primary responsibilities include managing a classroom environment and planning for, delivering and assessing student instruction over time. The state does not have a process in place for teacher roster verification.
Wisconsin does not publish data on teacher production that connects program completion, certification and hiring statistics.
Source: Data Quality Campaign, www.dataqualitycampaign.org
Strengthen data link between teachers and students.
Although the state's teacher-student data link can connect more than one educator to a particular student in a given course, Wisconsin should put in place a process for teacher roster verification. This is of particular importance for using the data system to provide evidence of teacher effectiveness.
Publish data on teacher production.
Of the teachers who graduate from preparation programs each year, only a subset are certified, and only some of those certified are actually hired in the state. While it is certainly desirable to produce a large enough pool to give districts a choice in hiring, the substantial oversupply in some teaching areas is not good for the profession. Wisconsin should look to Maryland's "Teacher Staffing Report" as a model; its primary purpose is to determine teacher shortage areas while also identifying areas of surplus. By collecting similar hiring data from its districts, Wisconsin would build a rich set of data that can inform policy decisions.
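As a purely illustrative sketch of how connected production data could be used, the snippet below flags surplus and shortage areas from completion, certification and hiring counts. The teaching areas, counts and thresholds are made-up placeholders, not actual Wisconsin or Maryland figures, and the classification rule is an assumption for illustration only.

```python
# Hypothetical pipeline counts: (program completers, newly certified, hired in state).
# All values below are placeholders, not real statistics.
pipeline = {
    "Elementary":        (2400, 2100, 900),
    "Special Education": (350, 320, 410),
    "Physics":           (60, 55, 80),
}

for area, (completers, certified, hired) in pipeline.items():
    # Assumed rule of thumb: compare the certified supply to actual in-state hiring.
    ratio = certified / hired if hired else float("inf")
    status = "surplus" if ratio > 1.25 else ("shortage" if ratio < 1.0 else "roughly balanced")
    print(f"{area}: {completers} completers -> {certified} certified -> {hired} hired ({status})")
```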
Wisconsin recognized the factual accuracy of this analysis. The state added that it is currently working with the Wisconsin Value-Added Research Center (VARC) to create a teacher roster verification process. VARC has already researched the existing data collections to determine strengths and areas for growth in supporting the development of the verification process.
It is an inefficient use of resources for individual districts to build their own data systems to measure teacher effectiveness. States need to take the lead and provide districts with state-level data that can be used for this purpose. Furthermore, multiple years of data are necessary to enable meaningful determinations of teacher effectiveness. Value-added analysis requires both student and teacher identifiers and the ability to match test records over time. Such data are useful not just for teacher evaluation but also for measuring overall school performance and the performance of teacher preparation programs.
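As a rough sketch of the linkage this implies, the example below joins hypothetical student test records to teacher-of-record assignments and matches each student's score to the prior year's score. The file names, column names and the simple average gain are assumptions for illustration; a real value-added model would apply far more extensive statistical controls.

```python
import pandas as pd

# Hypothetical extracts from a state longitudinal data system.
# Column and file names are illustrative assumptions, not an actual state schema.
scores = pd.read_csv("assessment_scores.csv")   # student_id, school_year, subject, scale_score
rosters = pd.read_csv("verified_rosters.csv")   # student_id, teacher_id, school_year, subject

# Attach the teacher of record to each tested student in each year.
linked = scores.merge(rosters, on=["student_id", "school_year", "subject"], how="inner")

# Match each student's current-year score to the prior year's score,
# so growth over time, not a single snapshot, can be attributed.
prior = linked.rename(columns={"scale_score": "prior_score"})
prior["school_year"] = prior["school_year"] + 1
growth = linked.merge(
    prior[["student_id", "subject", "school_year", "prior_score"]],
    on=["student_id", "subject", "school_year"],
    how="inner",
)
growth["gain"] = growth["scale_score"] - growth["prior_score"]

# Average gain of each teacher's students -- a crude stand-in for a value-added estimate.
print(growth.groupby("teacher_id")["gain"].mean())
```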
Additional elements are needed to use data to assess teacher effectiveness.
States need to have some advanced elements in place in order to apply data from the state data system fairly and accurately to teacher evaluations. States must have a clear definition of teacher of record that connects teachers to the students they actually instruct, not just students who may be in a teacher's homeroom or for whom the teacher performs administrative but not instructional duties. There should also be a process in place for roster verification, ideally occurring multiple times a year, to ensure that students and teachers are accurately matched. Systems should also have the ability to connect multiple educators to a single student. While states may establish different business rules for such situations, what is important is that the mechanism exists, in recognition of the many possible permutations of student and teacher assignments.
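The sketch below illustrates, with hypothetical field names and business rules, how a roster record might connect more than one educator to a single student and carry a teacher-verification flag. It is a minimal illustration of the idea, not the schema of any actual state system.

```python
from dataclasses import dataclass

@dataclass
class RosterEntry:
    """One verified link between a student and an educator for a course.

    Field names and the share-of-instruction rule are assumptions for illustration.
    """
    student_id: str
    teacher_id: str
    course_id: str
    term: str                      # verification would ideally recur during the year
    share_of_instruction: float    # e.g., 0.5 when two teachers co-teach
    verified_by_teacher: bool = False  # teacher confirms the student-teacher match

# Two educators linked to the same student in the same course.
entries = [
    RosterEntry("S1001", "T2001", "ALG1", "2011-fall", 0.5),
    RosterEntry("S1001", "T2002", "ALG1", "2011-fall", 0.5),
]

# One integrity check a verification process might run: claimed shares of
# instruction for a student in a course should sum to 100 percent.
total = sum(e.share_of_instruction for e in entries)
assert abs(total - 1.0) < 1e-9, "instructional shares do not sum to 100%"
```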
State Data Systems: Supporting Research
The Data Quality Campaign tracks the development of states' longitudinal data systems by reporting annually on states' inclusion of 10 elements in their data systems. Among these 10 elements are the three key elements (Elements 1, 3 and 5) that NCTQ has identified as being fundamental to the development of value-added assessment. For more information, see http://www.dataqualitycampaign.org.
For information about the use of student-growth models to report on student-achievement gains at the school level, see P. Schochet and H. Chiang, "Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains", July 2010, U.S. Department of Education, NCEE 2010-4004; as well as The Commission on No Child Left Behind, Commission Staff Research Report: Growth Models, An Examination Within the Context of NCLB, Beyond NCLB: Fulfilling the Promise to Our Nation's Children, 2007.
For information about the differences between accountability models, including the differences between growth models and value-added growth models, see P. Goldschmidt, P. Roschewski, K. Choi, W. Auty, S. Hebbler, R. Blank, and A. Williams, "Policymakers' Guide to Growth Models for School Accountability: How Do Accountability Models Differ?" Council of Chief State School Officers' Report, 2005 at: http://www.ccsso.org/Documents/2005/Policymakers_Guide_To_Growth_2005.pdf
For information regarding the methodologies and utility of value-added analysis see, C. Koedel and J. Betts, "Does Student Sorting Invalidate Value-Added Models of Teacher Effectiveness? An Extended Analysis of the Rothstein Critique", Education Finance and Policy, Volume 6, No. 1, Winter 2011, pp. 18-42; D. Goldhaber and M. Hansen, "Assessing the Potential of Using Value-Added Estimates of Teacher Job Performance for Making Tenure Decisions." The Urban Institute/Calder, February 2010, Working Paper 31; S. Glazerman, S. Loeb, D. Goldhaber, D. Staiger, S. Raudenbush, and G. Whitehurst, "Evaluating Teachers: The Important Role of Value-Added." Brookings Brown Center Task Group on Teacher Quality, November 2010; S. Glazerman, D. Goldhaber, S. Loeb, S. Raudenbush, D. Staiger, G. Whitehurst, and M. Croft, Passing Muster: Evaluating Teacher Evaluation Systems, The Brookings Brown Center Task Group on Teacher Quality, April 2011; D. N. Harris, "Teacher value-added: Don't end the search before it starts," Journal of Policy Analysis and Management, Volume 28, No. 4, Autumn 2009, pp. 693-699; H.C. Hill, "Evaluating value-added models: A validity argument approach," Journal of Policy Analysis and Management, Volume 28, No. 4, Autumn 2009, pp. 700-709; T.J. Kane and D.O. Staiger, "Estimating teacher impacts on student achievement: An experimental evaluation". National Bureau of Economic Research, Working Paper No. 14607, December 2008.
There is no shortage of studies using value-added methodologies by researchers including T.J. Kane, E. Hanushek, S. Rivkin, J.E. Rockoff, and J. Rothstein. See also T.J. Kane and D.O. Staiger, "Estimating teacher impacts on student achievement: An experimental evaluation". National Bureau of Economic Research, Working Paper No. 14607, December 2008; E.A. Hanushek and S.G. Rivkin, "Generalizations about using value-added measures of teacher quality." American Economic Review, Volume 100, No. 2, May 2010, pp. 267-271; J. Rothstein, "Teacher Quality in Educational Production: Tracking, Decay, and Student Achievement." The Quarterly Journal of Economics, Volume 125, No. 1, February 2010, pp. 175-214; S.G. Rivkin, E.A. Hanushek, and J.F. Kain, "Teachers, Schools, and Academic Achievement." Econometrica, Volume 73, No. 2, March 2005, pp. 417-458; E.A. Hanushek, "The Difference is Great Teachers," in Waiting for "Superman": How We Can Save America's Failing Public Schools, Karl Weber, ed., pp. 81-100, New York: Public Affairs, 2010.
See also NCTQ's "If Wishes Were Horses" by Kate Walsh at: http://www.nctq.org/p/publications/docs/wishes_horses_20080316034426.pdf and the National Center on Performance Incentives at: www.performanceincentives.org.
For information about the limitations of value-added analysis, see Jesse Rothstein, "Do Value-Added Models Add Value? Tracking, Fixed Effects, and Causal Inference." Princeton University and NBER. Working Paper No. 159, November 2007 as well as Dale Ballou, "Value-added Assessment: Lessons from Tennessee," Value Added Models in Education: Theory and Applications, ed. Robert W. Lissitz (Maple Grove, MN: JAM Press, 2005). See also Dale Ballou, "Sizing Up Test Scores," Education Next, Volume 2, No. 2, Summer 2002, pp. 10-15.