Value and challenges in using student data for teacher prep accountability


At the intersection of policy pushes to assess teacher effectiveness and to improve accountability systems for teacher preparation programs lie state efforts to connect student achievement data to preparation programs. About a dozen states have begun linking student learning gains to programs or have announced plans to do so, and many more will likely follow suit, especially if the long-awaited new Higher Education Act (HEA) regulations make such linkages a federal requirement.

The use of student achievement data holds great promise: it allows objective comparison of programs within the same state and can help institutions improve program quality. With this great value, however, comes great challenge. Building strong growth or value-added models for teacher preparation programs is no small task.

We've taken a close look at the experience of early adopters, and in a new brief NCTQ offers six core principles for strong design, based on the models developed in three pioneering states: Louisiana, North Carolina and Tennessee.

1.  Generate data specific enough to produce findings for individual programs within an institution.

2.  Focus on identifying the outliers: the programs at the highest and lowest ends of the spectrum.

3.  Use an absolute standard of new teacher performance for comparison.

4.  Try to keep politics out of the technical design of the model.

5.  Check the impact of the distribution of graduates among K-12 schools in a state.

6.  Communicate findings clearly.
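
To make principles 2 and 3 concrete, here is a minimal, purely illustrative sketch of how a state might flag outlier programs against an absolute benchmark. Everything in it is hypothetical: the program names, growth scores, benchmark and cutoff are invented for illustration, and real growth or value-added models are far more sophisticated, adjusting for factors such as students' prior achievement and school context.

```python
# Purely illustrative: a toy comparison of teacher preparation programs using
# invented growth scores for students taught by each program's recent graduates.
# Real state models adjust for prior achievement, demographics, and school context.
from statistics import mean

# Hypothetical growth scores, in standard-deviation units (all values invented).
growth_by_program = {
    "Program A": [0.12, 0.08, 0.15, 0.10, 0.05],
    "Program B": [-0.02, 0.01, 0.00, -0.05, 0.03],
    "Program C": [-0.20, -0.15, -0.18, -0.22, -0.10],
}

# Principle 3: compare against an absolute standard of new teacher performance
# (here, a hypothetical statewide average for novice teachers, set to 0.0).
BENCHMARK = 0.0

# Principle 2: flag only the clear outliers rather than finely ranking programs.
OUTLIER_CUTOFF = 0.10  # hypothetical cutoff, in growth-score units

for program, scores in growth_by_program.items():
    gap = mean(scores) - BENCHMARK
    if gap >= OUTLIER_CUTOFF:
        label = "high outlier"
    elif gap <= -OUTLIER_CUTOFF:
        label = "low outlier"
    else:
        label = "near the benchmark"
    print(f"{program}: mean growth {mean(scores):+.2f} ({label})")
```

Even this toy version surfaces the design questions the principles address: what the absolute standard should be, where to draw the outlier cutoff, and how to communicate the result clearly.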

Just as student growth data have an important role to play as one of multiple measures in teacher evaluation, so too can they serve as part of a multifaceted assessment of teacher preparation programs. These principles can help states maximize their models' usefulness in identifying high- and low-performing programs and in helping programs improve. Read more here.