Reviewing the Review (and other types of teacher prep evaluation)


When the National Academy of Education announced a couple of years ago that it would produce a report on the evaluation of teacher preparation programs, we didn't know whether to be gratified or worried. On one hand, it was clear that the Teacher Prep Review had served as an impetus for the report. On the other, the report's steering committee was led by a scholar who had dismissed the Review as little more than an attempt to garner "splashy magazine cover stories."

Issued last Friday, the Academy's report, Evaluation of Teacher Preparation Programs, is notable for its even-handedness. It describes the strengths and weaknesses of each of the main forms of evaluating teacher training: federal Title II reporting, state approval processes, value-added modeling of the effectiveness of program graduates, national accreditation, program self-evaluation, and the Teacher Prep Review. The report found warts on all of them.

Of the Review, the report warns that programs may doctor their syllabi to earn higher ratings (a concern of ours as well, which is why we perform audits to make sure we get authentic documents). But while the report repeats some of the superficial criticism that greeted the Review's publication, it also acknowledges the steps we have taken to mitigate some of the unintended consequences of ratings efforts. Bottom line: the report accords the Review's purpose and basic methods as much legitimacy as the other evaluation methods it examined. That seems to be a fairly big step forward, considering the vociferous protests the leaders of the field leveled against the very idea of the Review when we launched it in 2011.

If our concerns about how the report would treat our work proved largely misplaced, so too were our hopes for the contribution it might make. Though the report's authors agree that the perfect should not be the enemy of the good, no approach they reviewed was good enough for them to recommend in whole, in part or in combination. Instead, they ask that policymakers and program leaders answer seven basic questions as they develop systems of evaluating teacher preparation programs. The questions are commonsensical enough -- What is the primary purpose of the evaluation system? What are the most important aspects of teacher preparation? How will the evaluation system itself be evaluated? -- and so forth. But it would be hard to imagine anyone seriously undertaking the evaluation of teacher preparation programs without answering these questions.

Evaluating teacher preparation programs is devilishly tricky business, so perhaps concrete and detailed guidance is too much to ask. But one cannot help wondering whether the report's questions to evaluators constitute yet another delaying tactic by a field that seems unwilling to acknowledge the urgency of scrutinizing how tomorrow's teachers are trained.