State Data Systems: Texas

Identifying Effective Teachers Policy

Goal

The state should have a data system that contributes some of the evidence needed to assess teacher effectiveness.

Meets goal in part

Analysis of Texas's policies

Texas does not have a data system that can be used to provide evidence of teacher effectiveness.

However, Texas does have two of the three elements necessary to develop a student- and teacher-level longitudinal data system. The state assigns unique student identifiers that connect student data across key databases and across years, and it has the capacity to match student test records from year to year in order to measure student academic growth.
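As a rough illustration of what these two elements make possible, the sketch below (Python, with hypothetical table and field names; not the actual PEIMS data model) matches student test records from consecutive years by unique identifier and computes a simple gain score:

    # Minimal sketch of year-to-year record matching; all names are
    # hypothetical, not the actual Texas schema.
    import pandas as pd

    # Each row is one student's test result in one year, keyed by the
    # unique statewide student identifier.
    scores_2010 = pd.DataFrame({"student_id": [101, 102, 103],
                                "math_score": [480, 510, 495]})
    scores_2011 = pd.DataFrame({"student_id": [101, 102, 104],
                                "math_score": [502, 508, 490]})

    # The unique identifier is what allows records for the same child to
    # be joined across years (and across databases).
    matched = scores_2010.merge(scores_2011, on="student_id",
                                suffixes=("_2010", "_2011"))

    # A simple gain score; students without records in both years
    # (103 and 104 here) drop out of the inner join.
    matched["growth"] = matched["math_score_2011"] - matched["math_score_2010"]
    print(matched)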

Although Texas assigns teacher identification numbers, it cannot match individual teacher records with individual student records.
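The missing third element is a student-teacher link: a roster table that ties each student record to the teacher responsible for that student's instruction. A minimal sketch of what such a link would enable, again with hypothetical names:

    # Hypothetical roster (link) table; this is the element the Texas
    # system currently lacks, sketched here for illustration only.
    import pandas as pd

    roster = pd.DataFrame({"student_id": [101, 102, 103],
                           "teacher_id": ["T9", "T9", "T4"]})
    growth = pd.DataFrame({"student_id": [101, 102, 103],
                           "growth": [22, -2, 15]})

    # With the link in place, student growth can be rolled up by teacher,
    # which is the raw material for any measure of teacher effectiveness.
    by_teacher = (roster.merge(growth, on="student_id")
                        .groupby("teacher_id")["growth"].mean())
    print(by_teacher)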


Recommendations for Texas

Develop the capacity of the state data system.
Texas should ensure that its state data system can match individual teacher records with individual student records.

Develop a clear definition of "teacher of record."
A definition of teacher of record is necessary in order to use the student-teacher data link for teacher evaluation and related purposes. Texas defines the teacher of record as the teacher who is responsible for the classroom, determines the instruction delivered and assigns the final grades. However, to ensure that data provided through the state data system are actionable and reliable, Texas should articulate a more precise definition of teacher of record and require its consistent use throughout the state.
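One way to make a teacher-of-record definition precise and consistently applied is to encode it as structured roster data rather than prose. The sketch below is purely illustrative; the field names and the majority-of-instruction rule are assumptions, not PEIMS specifications:

    # Illustrative roster record for an unambiguous "teacher of record"
    # designation; all fields and thresholds are hypothetical.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class RosterEntry:
        student_id: int
        teacher_id: str
        start_date: date          # when this teacher assumed responsibility
        end_date: date            # when that responsibility ended
        instruction_share: float  # fraction of instruction delivered, 0-1
        assigns_final_grade: bool

    def is_teacher_of_record(entry: RosterEntry) -> bool:
        """One possible operational rule: the teacher of record delivers
        the majority of instruction and assigns the final grade."""
        return entry.instruction_share > 0.5 and entry.assigns_final_grade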

State response to our analysis

Texas asserted that it defines teacher of record as an educator employed by a school district who teaches the majority of the instructional day in an academic instructional setting and is responsible for evaluating student achievement and assigning grades. 

The state also pointed out that in 2010-2011, it established the teacher-to-student link in the Public Education Information Management System (PEIMS) and, in September 2010, contracted with the Project on Educator Effectiveness and Quality (PEEQ) to develop a metric that measures a teacher's effect on student achievement. The objective was to assess the performance of new teachers in their first three years in the classroom and to provide feedback to preparation programs, teachers and policymakers to improve the quality of teaching and enhance student learning.

Texas noted that PEEQ is developing a comprehensive assessment of a teacher's effectiveness that will combine a value-added component with qualitative measures, such as a principal survey based on classroom observations. This metric will serve as the third standard of the accountability system for educator preparation programs, and a pilot metric is expected to be available in March 2012. Although the analysis is not yet complete, the state will likely have evidence of the effectiveness of some teachers using 2010-2011 data.
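For context, value-added components of the kind described here typically estimate each teacher's contribution by comparing students' actual scores with the scores predicted from their own prior achievement. A generic textbook specification, not PEEQ's actual metric, looks like:

    A_{i,t} = \beta A_{i,t-1} + \gamma X_{i} + \theta_{j(i,t)} + \varepsilon_{i,t}

where A_{i,t} is student i's achievement in year t, A_{i,t-1} is the prior-year score, X_i denotes observed student characteristics, \theta_{j(i,t)} is the estimated effect of the teacher j assigned to student i in year t (the teacher's "value added"), and \varepsilon_{i,t} is an error term.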

Finally, Texas added that it has received a Statewide Longitudinal Data System grant that will allow it to transform the existing Texas Public Education Information Resource (TPEIR) data warehouse into a model that furthers the use of more robust, timely performance data for elementary, secondary and postsecondary education. The enhanced TPEIR database, modified to include student/teacher linkages throughout the P-20 continuum, will build the capacity to make decisions based on evidence of effectiveness at multiple levels and for multiple purposes: at the local level, for improved P-12 performance; at the state level, for policymaking and scaling up interventions that prove successful; and at the national level, for research into policies and practices that close achievement gaps and improve performance for all students. A portion of the grant will focus on student achievement, teacher effectiveness and teacher preparation.

Research rationale

The Data Quality Campaign tracks the development of states' longitudinal data systems by reporting annually on states' inclusion of 10 elements in their data systems. Among these 10 elements are the three key elements (Elements 1, 3 and 5) that NCTQ has identified as being fundamental to the development of value-added assessment. For more information, see http://www.dataqualitycampaign.org.

For information about the use of student growth models to report on student achievement gains at the school level, see P. Schochet and H. Chiang, "Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains," Mathematica Policy Research, U.S. Department of Education (2010); as well as The Commission on No Child Left Behind, "Commission Staff Research Report: Growth Models, An Examination Within the Context of NCLB," Beyond NCLB (2007).

For information about the differences between accountability models, including the differences between growth models and value-added growth models, see Pete Goldschmidt et al., "Policymakers' Guide to Growth Models for School Accountability: How Do Accountability Models Differ?" Council of Chief State School Officers (2005), at: http://www.ccsso.org/publications/details.cfm?PublicationID=287

For information regarding the methodologies and utility of value-added analysis, see C. Koedel and J. Betts, "Does Student Sorting Invalidate Value-Added Models of Teacher Effectiveness? An Extended Analysis of the Rothstein Critique," Education Finance and Policy, 6(1) (2011); D. Goldhaber and M. Hansen, "Assessing the Potential of Using Value-Added Estimates of Teacher Job Performance for Making Tenure Decisions," Urban Institute (2010); S. Glazerman et al., "Evaluating Teachers: The Important Role of Value-Added," Brookings Brown Center Task Group on Teacher Quality (2011); S. Glazerman et al., Passing Muster: Evaluating Teacher Evaluation Systems, Brookings Brown Center Task Group on Teacher Quality (2011); D.N. Harris, "Teacher value-added: Don't end the search before it starts," Journal of Policy Analysis and Management, 28(4) (2009), pp. 693-699; H.C. Hill, "Evaluating value-added models: A validity argument approach," Journal of Policy Analysis and Management, 28(4) (2009), pp. 700-709; and T.J. Kane and D.O. Staiger, "Estimating Teacher Impacts on Student Achievement: An Experimental Evaluation," NBER Working Paper W14607 (2008).

There is no shortage of studies using value-added methodologies by researchers including Thomas J. Kane, Eric Hanushek, Steven Rivkin, Jonah E. Rockoff and Jesse Rothstein. See also E.A. Hanushek and S.G. Rivkin, "Generalizations about using value-added measures of teacher quality," American Economic Review (May 2010); J. Rothstein, "Teacher Quality in Educational Production: Tracking, Decay, and Student Achievement," Quarterly Journal of Economics, 125(1) (2010); S.G. Rivkin, E.A. Hanushek and J.F. Kain, "Teachers, Schools, and Academic Achievement," Econometrica, 73(2) (2005), pp. 417-458; and E.A. Hanushek, "The Difference is Teacher Quality," in Waiting for "Superman": How We Can Save America's Failing Public Schools, ed. Karl Weber (New York: Public Affairs, 2010).

See also NCTQ's "If Wishes Were Horses" by Kate Walsh at: http://www.nctq.org/p/publications/docs/wishes_horses_20080316034426.pdf and the National Center on Performance Incentives at: www.performanceincentives.org.

For information about the limitations of value-added analysis, see Jesse Rothstein, "Do Value-Added Models Add Value? Tracking, Fixed Effects, and Causal Inference," Princeton University and NBER (2007); as well as Dale Ballou, "Value-added Assessment: Lessons from Tennessee," in Value Added Models in Education: Theory and Applications, ed. Robert W. Lissitz (Maple Grove, MN: JAM Press, 2005). See also Dale Ballou, "Sizing Up Test Scores," Education Next, Summer 2002, 2(2).