TQB: Teacher Quality Bulletin

Effectively evaluating special education teachers may require a different approach


A high-functioning teacher evaluation system should generate a wealth of useful data, pointing teachers in the right direction as they work to advance their teaching skills. That's why findings from a new working paper by Nathan Jones (Boston University) and his coauthors may be cause for genuine concern about the most widely used evaluation instrument in the country, the Danielson Group's Framework for Teaching. They find that the instrument isn't designed to identify exemplary teaching practices in a special education setting.

Special education teachers make up roughly 12% of teachers in schools and serve a population of students most acutely in need of effective instruction. Given that students with disabilities benefit from more explicit, teacher-directed instruction, the evaluation instruments used to assess teachers in these classrooms need to reflect and reward those practices, which the Danielson framework apparently does not.

Researchers evaluated videotaped lessons of 51 special education teachers in Rhode Island using the Danielson framework and found that these teachers consistently earned lower scores on the tool's Instruction domain. Digging into the videotaped lessons, the researchers became convinced that the evaluation instrument, not the teachers, was the cause of the low scores (not a big leap, as Danielson makes no secret that the framework is based on an inquiry-oriented conception of teaching, in which students guide their own learning and teachers act as facilitators). When the same lessons were scored with a different instrument, one that recognizes the benefits of direct instruction (the Quality of Classroom Instruction instrument), the same teachers' scores were considerably higher.