A user's guide for VAM


From questions about evaluation, we turn to some answers. Understanding the ins and outs of using value-added measures (VAM) to evaluate teacher performance is not an easy lift. But University of Wisconsin education policy professor Doug Harris's new book helps lighten the load.

At the top of Harris's list of misconceptions about applying VAM:

  • Value-added can't be used to evaluate educators because different teachers teach different students. Harris points out that this is exactly what VAM is designed to account for: it takes into consideration where students start and where they finish (a minimal sketch of the idea follows this list).

  • We cannot evaluate educators based on value-added because the measures have flaws. Harris notes that, flaws and all, value-added is leaps and bounds better than the criteria currently used to measure teacher performance.

  • Value-added measures aren't useful because they are summative, not formative. Harris responds that no single performance measure can both indicate whether a teacher is a high performer and show how that teacher could improve. This is why, he suggests, value-added should be used in tandem with formative measures such as principal or peer observations.
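
To make the "start and finish" idea concrete, here is a minimal sketch of a value-added calculation. It is not Harris's model or any district's actual formula, and all the data and effect sizes are invented: predict each student's end-of-year score from the prior-year score, then credit each teacher with the average amount by which their students beat (or miss) that prediction.

```python
# A toy value-added calculation on invented data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "teacher": rng.choice(["A", "B", "C"], size=n),
    "pre": rng.normal(50, 10, size=n),           # prior-year test score
})
true_effect = {"A": 2.0, "B": 0.0, "C": -2.0}    # hypothetical teacher effects
df["post"] = (df["pre"]
              + df["teacher"].map(true_effect)
              + rng.normal(0, 5, size=n))

# Predict the end-of-year score from where each student started...
fit = smf.ols("post ~ pre", data=df).fit()
df["resid"] = fit.resid

# ...then average each teacher's residuals: a raw value-added estimate.
print(df.groupby("teacher")["resid"].mean().round(2))
```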

Harris is pretty straightforward about the vulnerability of VAM data, proposing three rules for its use:

Rule #1: Hold educators accountable only for what they can control, which means that any good VAM system must first control for factors outside schools' and teachers' control, such as class size, prior test scores, and funding.
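
In regression terms, Rule #1 amounts to adding the out-of-control factors as covariates, so teachers are compared only after those factors have been netted out. The sketch below extends the one above with an invented class-size variable; it is illustrative, not a prescribed specification.

```python
# Netting out prior scores and class size (two of Rule #1's examples)
# before attributing anything to teachers. Data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "teacher": rng.choice(["A", "B", "C"], size=n),
    "pre": rng.normal(50, 10, size=n),            # prior test score
    "class_size": rng.integers(18, 32, size=n),   # outside the teacher's control
})
# In this toy data, larger classes drag scores down a bit.
df["post"] = (df["pre"]
              + df["teacher"].map({"A": 2.0, "B": 0.0, "C": -2.0})
              - 0.3 * df["class_size"]
              + rng.normal(0, 5, size=n))

# Control for the outside factors; compare teachers on what's left over.
fit = smf.ols("post ~ pre + class_size", data=df).fit()
print(df.assign(resid=fit.resid).groupby("teacher")["resid"].mean().round(2))
```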

Rule #2: Hold educators at least partly accountable for those factors they can partially control—such as student absenteeism.

Rule #3: Attach stakes to performance measures in proportion to their stability; in particular, be careful about applying VAM to individual teachers. Harris suggests treading carefully when it is used to decide issues like tenure: pilot extensively, and combine it with other measures such as classroom observations.
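
A small simulation, not from the book, suggests why stability matters: with one classroom of roughly 25 students per year, random noise in test scores makes single-year value-added estimates bounce around, so the year-to-year correlation comes out well below 1. All numbers here are invented for illustration.

```python
# Simulating the year-to-year instability of single-classroom estimates.
import numpy as np

rng = np.random.default_rng(2)
n_teachers, class_size = 200, 25
true_va = rng.normal(0, 2, n_teachers)   # stable underlying teacher effects

def one_year_estimate():
    # Each teacher's yearly estimate is the mean of noisy student gains.
    noise = rng.normal(0, 10, (n_teachers, class_size))
    return (true_va[:, None] + noise).mean(axis=1)

year1, year2 = one_year_estimate(), one_year_estimate()
print("year-to-year correlation:",
      round(float(np.corrcoef(year1, year2)[0, 1]), 2))
```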