Morsels of teacher research on offer, courtesy of Chicago


Chicago Public Schools is under the microscope, the subject of two separate studies: one covering teacher evaluation, the other performance pay.

First, the Consortium on Chicago School Research took a close look at the district's new evaluation system, finding, for the most part, that it does in fact reliably identify strong and weak teachers.

Piloted for two years now in a total of 100 schools, the teacher-evaluation system is grounded in the ubiquitous Charlotte Danielson Framework for Teaching. Researchers tested the system to see whether principals and outside observers--who conducted classroom observations side by side, but without discussion or collaboration--would give teachers "matching" ratings.

While there was general consistency in how teachers were rated, the one interesting twist was that principals were more likely than outside observers to rate teachers "distinguished" rather than "proficient."

One principal explained that, between the two, he chose "distinguished" so as to preserve positive working relationships with his teachers by keeping ratings consistent with those given in years past, prior to implementation of the more rigorous Danielson Framework system.

But researchers also attributed ratings mismatches to principals' incomplete understanding of the complicated system.

On the performance-pay front, Mathematica recently released a report assessing the effectiveness of the Chicago district's Teacher Advancement Program (TAP), which went into effect in 2007 and awards bonuses to teachers based on a combination of value-added measures and classroom-observation ratings. The study found no evidence that the performance-pay system had any positive effect on either student test scores or teacher retention rates in Chicago.

However, this was yet another Mathematica study released to the public after only a year's worth of data. Might Mathematica--or its sponsor, the U.S. Department of Education--be persuaded to delay publication of findings until a more meaningful amount of time has passed? Premature public releases of data can do great harm, as has been the case for the New Teacher Center, which has been reeling since Mathematica issued two consecutive negative reports on its induction program, relieved only by this year's good news.