The value-add of value-add

News comes today of an unwanted Christmas surprise: DCPS acknowledged errors in its IMPACT teacher evaluation system that affected 44 teachers in 2012-13, including one who was wrongly terminated.

Understandably (if predictably), local and national union leaders argue that the errors demonstrate that the use of value-added analysis in evaluating teachers is irredeemably flawed. Washington Teachers Union president Elizabeth Davis told the Washington Post, "The idea of attaching test scores to a teacher's evaluation - that idea needs to be junked." Randi Weingarten, president of the American Federation of Teachers, chimed in to The Huffington Post, "You can't simply take a bunch of data, apply an algorithm, and use whatever pops out of a black box to judge teachers, students and our schools."

Does IMPACT warrant this kind of opprobrium?

Any error that costs someone their job is, as Jason Kamras, chief of human capital for DCPS, put it plainly, "unacceptable." And while he aims to make the dismissed teacher "completely whole," including restoring his or her job, no doubt that will come as cold comfort to a teacher who may have been looking for work since the summer.

Still, DCPS deserves some credit for coming clean about these errors entirely on its own. And even though the errors also meant that 22 of the 44 teachers received higher ratings than they should have, DCPS will not adjust those ratings downward.

What's more, eliminating the value-added analysis from evaluation won't eliminate error; it will just make error harder to catch. The "black box" of statistical analysis will be replaced with the "black box" of what goes on inside principals' heads. As the Measures of Effective Teaching (MET) project showed, human observers alone do a relatively poor job of determining which teachers are best at promoting learning.
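
For readers wondering what the statistical "black box" actually does: at its core, a value-added model predicts each student's year-end score from prior performance and credits the teacher with the average gap between actual and predicted results. Here is a minimal sketch of that idea, with made-up numbers; DCPS's actual model controls for far more factors.

```python
import numpy as np

def value_added(prior, current, teacher_ids):
    """Toy value-added estimate: each teacher's score is the mean
    residual of his or her students' actual vs. predicted scores.
    A sketch of the general idea, not DCPS's model."""
    # Predict year-end scores from prior scores with a simple
    # least-squares fit.
    slope, intercept = np.polyfit(prior, current, 1)
    residuals = current - (slope * prior + intercept)
    # Attribute the average residual to each teacher.
    return {t: residuals[teacher_ids == t].mean()
            for t in np.unique(teacher_ids)}

# Hypothetical data: six students split between two teachers.
prior = np.array([50.0, 60.0, 70.0, 55.0, 65.0, 75.0])
current = np.array([58.0, 66.0, 75.0, 54.0, 63.0, 74.0])
teachers = np.array(["A", "A", "A", "B", "B", "B"])
print(value_added(prior, current, teachers))
```

Note that a single bad entry in the score data propagates straight into a teacher's estimate, which is exactly why audits like the one DCPS just conducted matter.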

Much more reliable is an evaluation system like IMPACT, which caps test scores at no more than 35 percent of a teacher's rating and incorporates observations from multiple trained evaluators. With research showing that IMPACT helps DCPS identify and keep its best teachers, and with results from the Trial Urban District Assessment (TUDA) showing that DCPS is making headway, junking IMPACT doesn't seem wise.
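
To make that weighting concrete, here is a hedged sketch of how a composite rating along these lines might be computed. The 35 percent cap on test scores comes from the discussion above; the other component weights and the common 1-to-4 scale are illustrative assumptions, not IMPACT's actual rubric.

```python
# The 0.35 test-score weight reflects IMPACT's cap; the remaining
# weights are hypothetical, for illustration only.
WEIGHTS = {
    "value_added": 0.35,      # student test-score growth
    "observations": 0.50,     # multiple trained evaluators
    "professionalism": 0.15,  # everything else
}

def composite_rating(scores):
    """Weighted average of component scores, each on a 1.0-4.0 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Example: strong observed practice with middling test-score growth.
print(composite_rating(
    {"value_added": 3.2, "observations": 3.5, "professionalism": 4.0}
))  # 0.35*3.2 + 0.50*3.5 + 0.15*4.0 = 3.47
```

Spreading the weight across several measures is the design point: no single data error, in test scores or in an observer's judgment, can swing the final rating by more than its share.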

We have to be fair to both teachers and students. Judging from the early returns from newly implemented teacher evaluation systems, principals still generally prefer to treat their teachers like widgets, often rating 95-plus percent of them as "effective." This sort of systemic error means too many teachers aren't getting the feedback they need to improve, and too many students have teachers who aren't helping them progress. Value-added analysis serves as an important check on principals who value building harmony over making hard decisions about which teachers might be in the wrong profession.

This won't be the last time errors like these turn up in value-added analysis. But let's weigh the errors we can see against those we often can't, errors that nonetheless have profound repercussions for the life trajectories of students.