Re-stirring the already stirred alt cert research pot

The war between alternative certification routes and traditional teacher preparation programs continues, fueled by yet another in a string of studies of teacher effectiveness. As was widely reported, a new Mathematica study released last week fanned the flames: TFA secondary math teachers once again proved substantially more effective than traditionally trained math teachers working in the same schools, regardless of how much experience the TFA teachers had.

We'd like to suggest that it's time to move beyond TFA's apparently solid track record in secondary math and take up a more profound question: which specific features of preparation might be making the difference among secondary candidates, whatever their route? (Not to mention that studies of elementary teachers prepared by alternative and traditional routes reveal no superiority for any route and no clue as to what might improve elementary teacher performance.)

For example, it's intriguing to consider the more ambiguous results for TNTP's Teaching Fellows. TNTP, the TFA spin-off that Michelle Rhee created some 15 years ago, applies the TFA model to recruit and train a largely untapped pool of (mostly) career changers.

Both organizations are highly selective, admitting fewer than 15 percent of applicants from pools with impressive college GPAs (an average of 3.4 for TNTP and 3.6 for TFA, last we checked). Both programs offer similar training: a short summer stint, a mentor assigned to each new teacher, and ongoing professional development. Yet the performance of TNTP teachers was indistinguishable from that of traditionally prepared teachers. Is TFA doing something that TNTP is not?

Grappling with questions like these would be genuinely useful and could move the field forward.
 
What's not helpful is to publish analyses of two separate samples in the same paper, include graphics that compare results from the two groups (see below), and then say, as Mathematica does, that it's not fair to compare the two sets of results. Isn't such comparison inevitable? Isn't it exactly what we should be doing if we want to figure out how to prepare better teachers? Just asking.

Image via Mathematica Policy Research Brief, September 2013.