We're about three-quarters of the way through this project. Of the roughly 4,000 articles we've looked at so far, 154 touch on the nuts and bolts of teacher preparation. And of these, only 14 have used top-quality methods to quantitatively measure the impact that characteristics of teacher preparation have on teacher effectiveness.
Here is one particularly disconcerting leitmotif we've found: researchers across the board are deeming their findings "significant" based on participants' self-reports on their dispositions. Here's an all-too-typical example: Does a literacy class make preservice teachers feel more comfortable with the prospect of teaching kids how to read? The professor "measures" the teacher candidates' literacy dispositions at the start of the semester with a survey. Then he has them take a similar (if not identical) survey at the end of the semester. Sometimes he even throws in an interview to probe deeper into students' affective responses ("What successes, challenges, and frustrations did you experience [in this class] and how did you deal with them?" and "How do you now feel about your ability and knowledge as a future reading teacher?"). Look, the "dispositional" scores have increased! This class must be an effective way to teach preservice teachers about literacy! Problem solved.
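To see how easily this design produces a "significant" result, consider a sketch of the typical analysis: a paired t-test on pre- and post-semester survey scores. The numbers below are invented for illustration; even a modest, uniform bump in self-ratings clears the conventional significance threshold, while telling us nothing about whether anyone can actually teach reading.

```python
import math
from statistics import mean, stdev

# Hypothetical 1-5 "literacy disposition" self-ratings for the same ten
# teacher candidates, before and after the course (invented data).
pre  = [2.8, 3.0, 2.5, 3.2, 2.9, 3.1, 2.7, 3.0, 2.6, 2.8]
post = [3.4, 3.6, 3.1, 3.5, 3.3, 3.8, 3.2, 3.4, 3.0, 3.5]

# Paired t-test: t = mean(differences) / standard error of the differences.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Two-tailed critical value for df = 9 at alpha = .05 is about 2.262.
significant = t > 2.262
print(f"t = {t:.2f}; 'significant' at p < .05: {significant}")
```

The test passes with room to spare, yet all it certifies is that candidates rated their own feelings higher the second time they were asked, by the professor who taught them, on a survey they had already seen.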
The methodological problems with this sort of research are so numerous it's hard to know where to begin.
The biggest pitfall is probably that surveys with such leading questions, administered by a course professor to his or her own students, encourage respondents to give the answer that will "satisfy the researcher."
But even if such methodological problems could be overcome, these are ultimately studies of teachers' self-reported feelings. What about the reading skills of these candidates' own students once the candidates became teachers? Did those improve? If they didn't, wouldn't it actually be worse for the teachers to feel confident about their ability to teach reading?