State Data Systems: Pennsylvania

Identifying Effective Teachers Policy

Goal

The state should have a data system that contributes some of the evidence needed to assess teacher effectiveness.

Meets a small part of goal
Suggested Citation:
National Council on Teacher Quality. (2013). State Data Systems: Pennsylvania results. State Teacher Policy Database. [Data set].
Retrieved from https://www.nctq.org/yearbook/state/PA-State-Data-Systems-22

Analysis of Pennsylvania's policies

Pennsylvania does not have a data system that can be used to provide evidence of teacher effectiveness.

Pennsylvania has two of the three elements necessary for the development of a student- and teacher-level longitudinal data system. The state assigns unique student identifiers that connect student data across key databases and across years, and it has the capacity to match student test records from year to year in order to measure student academic growth. Although Pennsylvania assigns teacher identification numbers, it cannot match individual teacher records with individual student records.

Commendably, Pennsylvania defines the teacher of record as a professional or temporary professional educator assigned by a school entity as the primary instructor for a group of students. Although the state's teacher-student data link cannot connect more than one educator to a particular student in a given course, the state does have a teacher roster verification process in place.

Pennsylvania does not publish teacher production data that connect program completion, certification and hiring statistics.

Recommendations for Pennsylvania

Develop capacity of state data system. 

Pennsylvania should ensure that its state data system is able to match individual teacher records with individual student records. 

Strengthen data link between teachers and students.

Pennsylvania should ensure that its teacher-student data link can connect more than one educator to a particular student in a given course. This is of particular importance for using the data system to provide evidence of teacher effectiveness.  

Publish data on teacher production. 

Of the teachers who graduate from preparation programs each year, only a subset are certified, and only some of those certified are actually hired in the state. While it is certainly desirable to produce a pool large enough to give districts a choice in hiring, the substantial oversupply in some teaching areas is not good for the profession. Pennsylvania should look to Maryland's "Teacher Staffing Report" as a model; its primary purpose is to determine teacher shortage areas, but it also identifies areas of surplus. By collecting similar hiring data from its districts, Pennsylvania would build a rich data set that can inform policy decisions.

State response to our analysis

Pennsylvania recognized the factual accuracy of this analysis. The state added that a teacher needs three consecutive school years of value-added reporting to receive a Pennsylvania Value-Added Assessment System (PVAAS) three-year rolling average, which can be based on any state-assessed grade/subject/course.
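For illustration only (the actual PVAAS estimation and weighting are not described here, so this simple unweighted form is an assumption), a three-year rolling average of a teacher's annual value-added estimates could be written as

\[
\bar{\theta}_{j}^{(t)} = \tfrac{1}{3}\left(\hat{\theta}_{j}^{(t-2)} + \hat{\theta}_{j}^{(t-1)} + \hat{\theta}_{j}^{(t)}\right),
\]

where \(\hat{\theta}_{j}^{(t)}\) is teacher \(j\)'s value-added estimate for school year \(t\); the average exists only once three consecutive years of reporting are available.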

Pennsylvania noted that this process includes LEAs submitting staff email addresses, coding LEA courses to state-tested course codes and linking each teacher who has responsibility for instruction to the students in a state-tested grade/subject/course. LEAs submit these data annually, and the data are then used to prepopulate the PVAAS roster verification system, through which teachers, principals and district administrators verify that the right students are linked to the right teachers for the right state-tested grade/subject/course and for the right percentage of instructional responsibility.

In addition, Pennsylvania is implementing new educator effectiveness legislation, which includes teacher-specific reporting through PVAAS. A pilot was conducted with 273 LEAs in the 2012-2013 school year, and statewide implementation will occur in 2013-2014. However, a PVAAS measure cannot be used on a teacher's rating form until 2015-2016.

Teachers receiving PVAAS teacher-specific reporting are permanent or temporary professional employees who hold a valid PA teaching certificate and who have full or partial responsibility for content-specific instruction of assessed eligible content as measured by state assessments. This may include teachers other than the teachers of record; Pennsylvania defines teacher of record as "a professional or temporary professional educator assigned by a school entity as the primary instructor for a group of students."

The state also maintains a longitudinal database, the Pennsylvania Information Management System (PIMS), which warehouses student, staff and course data. This system has been modified for 2013-2014 to align with the data needs of Pennsylvania's Educator Effectiveness system. These data will be used both in the building-level profile and in individual teachers' evaluations.


Last word

This analysis was revised subsequent to the state's review based on updated data from the Data Quality Campaign.

Research rationale

Value-added analysis connects student data to teacher data to measure student achievement growth and teacher performance.

Value-added models are an important tool for measuring student achievement and school effectiveness. These models measure individual students' learning gains, controlling for students' previous knowledge. They can also control for students' background characteristics. In the area of teacher quality, value-added models offer a fairer and potentially more meaningful way to evaluate a teacher's effectiveness than other methods schools use.
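In stylized form (the notation here is illustrative, not any particular state's model), such a model regresses a student's current test score on the prior-year score, background characteristics and a teacher effect:

\[
y_{i,t} = \beta\, y_{i,t-1} + \gamma' x_{i} + \theta_{j(i,t)} + \varepsilon_{i,t},
\]

where \(y_{i,t}\) is student \(i\)'s score in year \(t\), \(x_{i}\) is a vector of background characteristics, \(\theta_{j(i,t)}\) is the effect of the teacher who taught student \(i\) in year \(t\), and \(\varepsilon_{i,t}\) is an error term. The estimated \(\hat{\theta}_{j}\) is teacher \(j\)'s value-added.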

For example, at one time a school might have known only that its fifth-grade teacher, Mrs. Jones, consistently had students who did not score at grade level on standardized assessments of reading. With value-added analysis, the school can learn that Mrs. Jones' students were reading at a third-grade level when they entered her class and above a fourth-grade level at the end of the school year. While her students had not yet reached grade level, they had made more than a year's progress in her class. Because of value-added data, the school can see that she is an effective teacher.

The school could not have seen this effectiveness without a data system that connects student and teacher data. Furthermore, multiple years of data are necessary to enable meaningful determinations of teacher effectiveness. Value-added analysis requires both student and teacher identifiers and the ability to match test records over time.

It is an inefficient use of resources for individual districts to build their own data systems for value-added analyses.

States need to take the lead and provide districts with state-level data that can be used for the purpose of measuring teacher effectiveness. All states have longitudinal data systems, but not all states are yet able to connect student data to individual teacher records. Such data are useful not just for teacher evaluation but also for measuring overall school performance and the performance of teacher preparation programs.

State Data Systems: Supporting Research

The Data Quality Campaign tracks the development of states' longitudinal data systems by reporting annually on states' inclusion of 10 elements in their data systems. Among these 10 elements are the three key elements (Elements 1, 3 and 5) that NCTQ has identified as being fundamental to the development of value-added assessment. For more information, see http://www.dataqualitycampaign.org.

For information about the use of student-growth models to report on student-achievement gains at the school level, see P. Schochet and H. Chiang, "Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains," U.S. Department of Education, NCEE 2010-4004, July 2010; as well as the Commission on No Child Left Behind, Commission Staff Research Report: "Growth Models: An Examination Within the Context of NCLB," Beyond NCLB: Fulfilling the Promise to Our Nation's Children, 2007.

For information about the differences between accountability models, including the differences between growth models and value-added growth models, see P. Goldschmidt, P. Roschewski, K. Choi, W. Auty, S. Hebbler, R. Blank, and A. Williams, "Policymakers' Guide to Growth Models for School Accountability: How Do Accountability Models Differ?" Council of Chief State School Officers, 2005, at: http://www.ccsso.org/Documents/2005/Policymakers_Guide_To_Growth_2005.pdf

For information regarding the methodologies and utility of value-added analysis, see C. Koedel and J. Betts, "Does Student Sorting Invalidate Value-Added Models of Teacher Effectiveness? An Extended Analysis of the Rothstein Critique," Education Finance and Policy, Volume 6, No. 1, Winter 2011, pp. 18-42; D. Goldhaber and M. Hansen, "Assessing the Potential of Using Value-Added Estimates of Teacher Job Performance for Making Tenure Decisions," The Urban Institute/CALDER, Working Paper 31, February 2010; S. Glazerman, S. Loeb, D. Goldhaber, D. Staiger, S. Raudenbush, and G. Whitehurst, "Evaluating Teachers: The Important Role of Value-Added," Brookings Brown Center Task Group on Teacher Quality, November 2010; S. Glazerman, D. Goldhaber, S. Loeb, S. Raudenbush, D. Staiger, G. Whitehurst, and M. Croft, Passing Muster: Evaluating Teacher Evaluation Systems, Brookings Brown Center Task Group on Teacher Quality, April 2011; D.N. Harris, "Teacher value-added: Don't end the search before it starts," Journal of Policy Analysis and Management, Volume 28, No. 4, Autumn 2009, pp. 693-699; H.C. Hill, "Evaluating value-added models: A validity argument approach," Journal of Policy Analysis and Management, Volume 28, No. 4, Autumn 2009, pp. 700-709; and T.J. Kane and D.O. Staiger, "Estimating teacher impacts on student achievement: An experimental evaluation," National Bureau of Economic Research, Working Paper No. 14607, December 2008.

There is no shortage of studies using value-added methodologies by researchers including T.J. Kane, E. Hanushek, S. Rivkin, J.E. Rockoff, and J. Rothstein. See also E.A. Hanushek and S.G. Rivkin, "Generalizations about using value-added measures of teacher quality," American Economic Review, Volume 100, No. 2, May 2010, pp. 267-271; J. Rothstein, "Teacher Quality in Educational Production: Tracking, Decay, and Student Achievement," The Quarterly Journal of Economics, Volume 125, No. 1, February 2010, pp. 175-214; S.G. Rivkin, E.A. Hanushek, and J.F. Kain, "Teachers, Schools, and Academic Achievement," Econometrica, Volume 73, No. 2, March 2005, pp. 417-458; and E.A. Hanushek, "The Difference Is Great Teachers," in Waiting for "Superman": How We Can Save America's Failing Public Schools, Karl Weber, ed., New York: Public Affairs, 2010, pp. 81-100.

See also NCTQ's "If Wishes Were Horses" by Kate Walsh at: http://www.nctq.org/p/publications/docs/wishes_horses_20080316034426.pdf and the National Center on Performance Incentives at: www.performanceincentives.org.

For information about the limitations of value-added analysis, see Jesse Rothstein, "Do Value-Added Models Add Value? Tracking, Fixed Effects, and Causal Inference," Princeton University and NBER, Working Paper No. 159, November 2007; as well as Dale Ballou, "Value-added Assessment: Lessons from Tennessee," in Value Added Models in Education: Theory and Applications, ed. Robert W. Lissitz (Maple Grove, MN: JAM Press, 2005). See also Dale Ballou, "Sizing Up Test Scores," Education Next, Volume 2, No. 2, Summer 2002, pp. 10-15.