Data Systems Needed for Evaluation: Tennessee

Teacher and Principal Evaluation Policy

Goal

The state should have a data system that contributes some of the evidence needed to assess teacher effectiveness. The bar for this goal was raised in 2017.

Meets a small part of goal
Suggested Citation:
National Council on Teacher Quality. (2017). Data Systems Needed for Evaluation: Tennessee results. State Teacher Policy Database. [Data set].
Retrieved from: https://www.nctq.org/yearbook/state/TN-Data-Systems-Needed-for-Evaluation-77

Analysis of Tennessee's policies

Teacher of Record: Tennessee's working definition requires that a student be present for 150 days of classroom instruction per year, or 75 days per semester, before the student's record is attributable to a specific teacher. However, this definition is not explicitly articulated in state policy.
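
To make these attendance thresholds concrete, the following is a minimal sketch of how such an attribution rule could be encoded; the function and names are hypothetical illustrations, not Tennessee's actual implementation.

```python
# Minimal sketch of the working teacher-of-record rule described above:
# a student's record counts toward a teacher only after 150 days of
# classroom instruction in the year, or 75 days in a semester.
# All names here are hypothetical illustrations, not state code.

YEAR_THRESHOLD_DAYS = 150
SEMESTER_THRESHOLD_DAYS = 75

def is_attributable(days_in_year: int, days_in_semester: int) -> bool:
    """Return True if the student's record may be attributed to the teacher."""
    return (days_in_year >= YEAR_THRESHOLD_DAYS
            or days_in_semester >= SEMESTER_THRESHOLD_DAYS)

# A student present 80 days in one semester meets the semester threshold
# even though the annual count is still below 150.
assert is_attributable(days_in_year=80, days_in_semester=80)
assert not is_attributable(days_in_year=100, days_in_semester=60)
```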

Teacher Roster Verification: Tennessee has a process in place for teacher roster verification. However, this process is not explicitly articulated in state policy.
   
Linking Student-level Data and Teacher Performance: Tennessee has the ability to link student-level data to teacher performance data.

Teacher Mobility Data: Tennessee does not track teacher mobility data and make those data publicly available.

Recommendations for Tennessee

Formalize a definition of teacher of record that can be used to provide evidence of teacher effectiveness.
Although Tennessee appears to have a working definition of teacher of record, NCTQ strongly urges the state to incorporate this definition into formal state policy; doing so will help ensure that data provided through the state data system are actionable and reliable.

Formalize a process for teacher roster verification.
Although Tennessee appears to have a process in place for teacher roster verification, the state should make this process part of state policy. Formalizing it is particularly important if the data system is to provide evidence of teacher effectiveness.

Track teacher mobility data and make it publicly available.
Tennessee should not only track teacher mobility data at both the state and district levels but also make these data publicly available, consistent with applicable privacy constraints. Detailed analyses of teacher mobility and attrition would give a clearer picture of Tennessee's teaching force.

State response to our analysis

Tennessee asserted that TN Compass allows the state to track teacher mobility; however, the system is available only to district and state users. In addition, the state's district human capital data reports provide districts with information on teacher retention, movement within the district, and hiring.

Tennessee also noted that student data constitute 50 percent of the teacher effectiveness rating.
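
For illustration, that weighting amounts to a simple 50/50 composite; the component names and scale in this sketch are assumptions, not Tennessee's actual formula.

```python
# Hypothetical 50/50 composite reflecting the state's note that student
# data make up half of the effectiveness rating. Component names and the
# 1-5 scale are assumptions for illustration, not Tennessee's formula.

def effectiveness_rating(student_data_score: float, other_measures_score: float) -> float:
    """Weight student data at 50 percent of the overall rating."""
    return 0.5 * student_data_score + 0.5 * other_measures_score

print(effectiveness_rating(4.0, 3.0))  # 3.5 on a hypothetical 1-5 scale
```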

Updated: December 2017

How we graded

7E: Data Systems Needed for Evaluation 

  • Teacher of Record: The state should provide an adequate definition for "teacher of record."
  • Teacher Roster Verification: The state should have a process in place for teacher roster verification.
  • Linking Student and Teacher Data: The state should link student-level data to teacher performance data, consistent with applicable privacy constraints.
  • Tracking Teacher Mobility: The state should track teacher mobility data and ensure that it is publicly available, consistent with applicable privacy constraints.
Teacher of Record
One-quarter of the total goal score is earned based on the following:

  • One-quarter credit: The state will earn one-quarter of a point for providing an adequate definition for "teacher of record."
Teacher Roster Verification
One-quarter of the total goal score is earned based on the following:

  • One-quarter credit: The state will earn one-quarter of a point if it has a process in place for teacher roster verification.
Linking Student and Teacher Data
One-quarter of the total goal score is earned based on the following:

  • One-quarter credit: The state will earn one-quarter of a point if student-level data are linked to teacher performance data, consistent with applicable privacy constraints.
Tracking Teacher Mobility
One-quarter of the total goal score is earned based on the following:

  • One-quarter credit: The state will earn one-quarter of a point if teacher mobility data are tracked and made publicly available, consistent with applicable privacy constraints.
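
Taken together, the four quarter-credits sum to the total goal score. A minimal sketch of that arithmetic, with hypothetical criterion names, might look like this:

```python
# Sketch of the 7E rubric arithmetic: four criteria, each worth
# one-quarter of the total goal score. Criterion names are hypothetical.

CRITERIA = ("teacher_of_record", "roster_verification",
            "student_teacher_link", "mobility_tracking")

def goal_score(state_meets: dict) -> float:
    """Sum one-quarter point for each criterion the state satisfies."""
    return sum(0.25 for c in CRITERIA if state_meets.get(c, False))

# A state meeting two of the four criteria earns half of the goal score.
example = {"teacher_of_record": True, "roster_verification": False,
           "student_teacher_link": True, "mobility_tracking": False}
print(goal_score(example))  # 0.5
```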

Research rationale

It is an inefficient use of resources for individual districts to build their own data systems for value-added analyses. States need to take the lead and provide districts with state-level data that can be used not only to measure teacher effectiveness but also to track teacher mobility across the state.[1] In addition, multiple years of data are necessary to enable meaningful determinations of teacher effectiveness and to identify staffing trends.[2]

Teacher effectiveness analysis, including teachers' value-added measures, requires both student and teacher identifiers and the ability to match test records over time.[3] Such data are useful not just for teacher evaluation, but also to measure overall school performance and the performance of teacher preparation programs.[4]

States need to have some advanced elements in place in order to apply data from the state data system fairly and accurately to teacher evaluations.[5] Each state must have a clear definition of "teacher of record" that connects teachers to the students they actually instruct, not just students who may be in a teacher's homeroom or for whom the teacher performs administrative but not instructional duties. There should also be a process in place for roster verification, ideally occurring multiple times a year, to ensure that students and teachers are accurately matched. Systems should also be able to connect multiple educators to a single student; while states may establish different business rules for such situations, what matters is that the mechanism exists, in recognition of the many possible permutations of student and teacher assignments.
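
As a concrete illustration of these elements, the following is a minimal sketch of a student-teacher linkage record supporting many-to-many assignments and periodic roster verification; the schema and field names are assumptions, not any state's actual data model.

```python
# Hypothetical student-teacher linkage record supporting co-teaching
# (multiple educators per student) and roster verification, as the
# rationale describes. Field names are illustrative, not a real schema.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RosterLink:
    student_id: str
    teacher_id: str
    course_id: str
    instructional_share: float          # fraction of instruction provided
    verified_on: Optional[date] = None  # set when the teacher confirms the roster

# Two educators linked to one student in the same course:
links = [
    RosterLink("S001", "T100", "ALG1", instructional_share=0.6),
    RosterLink("S001", "T200", "ALG1", instructional_share=0.4),
]

# Roster verification: each teacher confirms their own rows, ideally
# several times a year, so mismatches are caught before evaluation.
for link in links:
    if link.teacher_id == "T100":
        link.verified_on = date(2017, 10, 15)
```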

Additional elements are needed to use data to assess teacher supply and demand. For example, states should include in their data systems the means to track when teachers leave schools or districts and when they enter new ones, and should make these data publicly available. Such data can support the state's effort to build a cohesive picture of its teacher labor market and workforce needs.
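
A minimal sketch of such mobility tracking, with hypothetical event names, follows; a real system would aggregate events before public release, consistent with privacy constraints.

```python
# Hypothetical teacher mobility event log of the kind the rationale
# recommends tracking. Event and field names are illustrative only.

from collections import Counter

events = [
    # (teacher_id, year, event, district_id)
    ("T100", 2016, "left_district", "D01"),
    ("T100", 2017, "entered_district", "D02"),
    ("T200", 2017, "left_school", "D01"),
]

# Aggregate counts by event type for public reporting, so no individual
# teacher is identifiable in the released figures.
print(Counter(event for _, _, event, _ in events))
# Counter({'left_district': 1, 'entered_district': 1, 'left_school': 1})
```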


[1] Cowan, J., Goldhaber, D., Hayes, K., & Theobald, R. (2016). Missing elements in the discussion of teacher shortages. Retrieved from http://www.caldercenter.org/missing-elements-discussion-teacher-shortages
[2] The Data Quality Campaign tracks the development of states' longitudinal data systems by reporting annually on states' inclusion of 10 elements in their data systems. Among these 10 elements are the three key elements (Elements 1, 3 and 5) that NCTQ has identified as being fundamental to the development of value-added assessment. For more information, see: Data Quality Campaign. (2017). Retrieved from http://www.dataqualitycampaign.org
[3] There is no shortage of studies using value-added methodologies by researchers including: Kane, T. J., & Staiger, D. O. (2008). Estimating teacher impacts on student achievement: An experimental evaluation (No. w14607). National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w14607.pdf?new_window=1; Hanushek, E. A., & Rivkin, S. G. (2010). Generalizations about using value-added measures of teacher quality. The American Economic Review, 100(2), 267-271.; Rothstein, J. (2010). Teacher quality in educational production: Tracking, decay, and student achievement. The Quarterly Journal of Economics, 125(1), 175-214.; Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 417-458. Retrieved from http://hanushek.stanford.edu/sites/default/files/publications/Rivkin%2BHanushek%2BKain%202005%20Ecta%2073(2).pdf; Hanushek, E. A. (2010). The difference is great teachers. In K. Weber (Ed.), Waiting for Superman (pp. 81-100). New York, NY: PublicAffairs. Retrieved from http://hanushek.stanford.edu/sites/default/files/publications/Hanushek%202010%20Superman.pdf; For information about the differences between accountability models, including the differences between growth models and value-added growth models, see: Goldschmidt, P., Roschewski, P., Choi, K., Auty, W., Hebbler, S., Blank, R., & Williams, A. (2005). Policymakers' guide to growth models for school accountability: How do accountability models differ? Washington, DC: Council of Chief State School Officers. Retrieved from http://www.ccsso.org/Documents/2005/Policymakers_Guide_To_Growth_2005.pdf; For information regarding the methodologies and utility of value-added analysis, see: Koedel, C., & Betts, J. R. (2011). Does student sorting invalidate value-added models of teacher effectiveness? An extended analysis of the Rothstein critique. Education Finance and Policy, 6(1), 18-42.; Goldhaber, D., & Hansen, M. (2010). Assessing the potential of using value-added estimates of teacher job performance for making tenure decisions (Working Paper 31). National Center for Analysis of Longitudinal Data in Education Research. Retrieved from http://www.urban.org/UploadedPDF/1001369_assessing_the_potential.pdf; Brown Center on Education Policy. Task Group on Teacher Quality, Glazerman, S., Loeb, S., Goldhaber, D. D., Raudenbush, S., & Whitehurst, G. J. (2010). Evaluating teachers: The important role of value-added (Vol. 201, No. 0). Washington, DC: Brown Center on Education Policy at Brookings. Retrieved from http://www.leg.state.vt.us/WorkGroups/EdOp/Brookings%20Value%20ADDED1117_evaluating_teachers.pdf; Glazerman, S., Goldhaber, D., Loeb, S., Raudenbush, S., Staiger, D. O., Whitehurst, G. J., & Croft, M. (2011). Passing muster: Evaluating teacher evaluation systems. Washington, DC: Brown Center on Education Policy at Brookings, 1-36. Retrieved from https://www.brookings.edu/research/passing-muster-evaluating-teacher-evaluation-systems/; Harris, D. N. (2009). Teacher value-added: Don't end the search before it starts. Journal of Policy Analysis and Management, 28(4), 693-699.; Hill, H. C. (2009). Evaluating value-added models: A validity argument approach. Journal of Policy Analysis and Management, 28(4), 700-709.; For information about the limitations of value-added analysis, see: Rothstein, J. (2007). Do value-added models add value? Tracking, fixed effects, and causal inference. Center for Economic Policy Studies, Princeton University. Retrieved from http://www.princeton.edu/ceps/workingpapers/159rothstein.pdf; as well as: Ballou, D. (2005). Value-added assessment: Lessons from Tennessee. Value added models in education: Theory and applications, 272-297.; See also: Ballou, D. (2002). Sizing up test scores. Education Next, 2(2). Retrieved from http://educationnext.org/sizing-up-test-scores/
[4] For information about the use of student-growth models to report on student-achievement gains at the school level, see: Schochet, P. Z., & Chiang, H. S. (2010). Error rates in measuring teacher and school performance based on student test score gains (NCEE 2010-4004). National Center for Education Evaluation and Regional Assistance. Retrieved from http://ies.ed.gov/ncee/pubs/20104004/pdf/20104004.pdf; as well as: Thompson, T. G., & Barnes, R. E. (2007). Beyond NCLB: Fulfilling the promise to our nation's children. The Commission on No Child Left Behind, 13-14. Retrieved from http://files.eric.ed.gov/fulltext/ED495292.pdf; See also: Walsh, K. (2007). If wishes were horses. National Council on Teacher Quality. Retrieved from http://www.nctq.org/p/publications/docs/wishes_horses_20080316034426.pdf; National Center on Performance Incentives. (2017). Examining performance incentives in education. Vanderbilt University. Retrieved from www.performanceincentives.org
[5] For information regarding the methodologies and utility of value-added analysis, see: Koedel, C., & Betts, J. R. (2011). Does student sorting invalidate value-added models of teacher effectiveness? An extended analysis of the Rothstein critique. Education Finance and Policy, 6(1), 18-42. Retrieved from http://www.mitpressjournals.org/doi/pdfplus/10.1162/EDFP_a_00027; Goldhaber, D., & Hansen, M. (2010). Assessing the potential of using value-added estimates of teacher job performance for making tenure decisions (Working Paper 31). National Center for Analysis of Longitudinal Data in Education Research. Retrieved from http://www.urban.org/UploadedPDF/1001369_assessing_the_potential.pdf; Brown Center on Education Policy. Task Group on Teacher Quality, Glazerman, S., Loeb, S., Goldhaber, D. D., Raudenbush, S., & Whitehurst, G. J. (2010). Evaluating teachers: The important role of value-added (Vol. 201, No. 0). Washington, DC: Brown Center on Education Policy at Brookings. Retrieved from http://www.leg.state.vt.us/WorkGroups/EdOp/Brookings%20Value%20ADDED1117_evaluating_teachers.pdf; Glazerman, S., Goldhaber, D., Loeb, S., Raudenbush, S., Staiger, D. O., Whitehurst, G. J., & Croft, M. (2011). Passing muster: Evaluating teacher evaluation systems. Washington, DC: Brown Center on Education Policy at Brookings, 1-36. Retrieved from https://www.brookings.edu/research/passing-muster-evaluating-teacher-evaluation-systems/; Harris, D. N. (2009). Teacher value-added: Don't end the search before it starts. Journal of Policy Analysis and Management, 28(4), 693-699.; Hill, H. C. (2009). Evaluating value-added models: A validity argument approach. Journal of Policy Analysis and Management, 28(4), 700-709.; Kane, T. J., & Staiger, D. O. (2008). Estimating teacher impacts on student achievement: An experimental evaluation (No. w14607). National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w14607.pdf?new_window=1; For information about the limitations of value-added analysis, see: Rothstein, J. (2007). Do value-added models add value? Tracking, fixed effects, and causal inference. Center for Economic Policy Studies, Princeton University. Retrieved from http://www.princeton.edu/ceps/workingpapers/159rothstein.pdf; as well as: Ballou, D. (2005). Value-added assessment: Lessons from Tennessee. Value added models in education: Theory and applications, 272-297.; See also: Ballou, D. (2002). Sizing up test scores. Education Next, 2(2). Retrieved from http://educationnext.org/sizing-up-test-scores/