Mark Ehlert and his colleagues at the University of Missouri take a run at answering these questions in an important CALDER working paper. Analyzing a rich panel of student data from Missouri public schools, they argue that typical value-added models (VAMs) may not adequately account for how key factors such as student demographic characteristics and school environment shape student learning, even when researchers include direct controls for these factors.
The problem, the authors argue, is that typical VAMs assume that the way in which these key factors impact learning within a school is the same as how they impact learning across schools. This turns out not to be a good assumption, and to correct for it requires adding a second step to VAMs to better account for the differences between schools. With a two-step VAM, a great teacher in a low-performing school will shine more brightly, while a teacher who might be riding on the coattails of a great school will be easier to spot. And, as the authors write, the two-step model "fully levels the playing field," all but eliminating the correlation between student demographics and the measured impact teachers have on student learning.
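To make the idea concrete, here is a minimal sketch of the two-step logic on synthetic data. This is an illustration of the general residualization approach, not the paper's actual specification (which involves school-level aggregates and more careful modeling); all variable names and the simulated dataset are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 600 students assigned to 30 teachers.
n_students, n_teachers = 600, 30
teacher = rng.integers(0, n_teachers, n_students)
frl = rng.binomial(1, 0.5, n_students)       # demographic indicator (e.g., free lunch)
prior = rng.normal(0.0, 1.0, n_students)     # prior-year test score
true_effect = rng.normal(0.0, 0.3, n_teachers)
score = (0.7 * prior - 0.4 * frl
         + true_effect[teacher]
         + rng.normal(0.0, 1.0, n_students))

# Step 1: regress current scores on prior scores and demographics,
# deliberately excluding teacher indicators, and keep the residuals.
# This strips out the predictable portion of achievement before any
# teacher comparison is made.
X = np.column_stack([np.ones(n_students), prior, frl])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid = score - X @ beta

# Step 2: a teacher's value-added estimate is the average residual
# among their own students.
va = np.array([resid[teacher == t].mean() for t in range(n_teachers)])
```

Because the demographic controls are absorbed in the first step, the second-step teacher estimates are, by construction, uncorrelated with those student characteristics across the sample, which is the "leveled playing field" the authors describe.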
So what if you don't do the value-added two-step? In a district whose teacher evaluation system uses a typical value-added model, strong teachers will have even more incentive to stay away from struggling schools, since they would appear less effective there. As policy makers make more and more use of value-added models, this two-step sure seems like a dance move they'll need to learn.
(My thanks to Cory Koedel, a co-author of the paper, for taking time to explain the nuances of the paper to me.)