A Conversation with Indiana University's Gerardo Gonzalez

In his 2012 State of Education address last week, Indiana Superintendent of Public Instruction Tony Bennett reaffirmed the state's intention to use data on novice teachers' impact on student learning to study where the best teachers are trained.

The dean of Indiana University's School of Education, Gerardo Gonzalez, applauded this move. He has actively sought data from the Indiana Department of Education on the impact of his program's graduates on students for over a year. Because some leaders in the teacher education field have raised serious objections to the use of value-added models for teacher preparation accountability, we decided to learn more about Dr. Gonzalez's position.

What are your thoughts on using value-added analysis to hold teacher preparation programs accountable?

I believe that any teacher education accountability process that incorporates value-added models (VAM) into its evaluation scheme must confront difficult questions of reliability and validity, which is why I feel strongly that such models must be developed in close consultation with stakeholders.

Sample sizes and other sources of error will have to be taken into account when grappling with questions of validity and reliability. But the theoretical assumptions built into the models, such as whether the quality of candidates admitted to a teacher preparation program is an integral component of program quality or an extraneous variable to be controlled statistically, form the foundation for any conclusions drawn from the data.

You bring up the issue of program selectivity. Some have expressed concern that a teacher preparation accountability system based on VAM will not be able to disentangle the effect of choosing good candidates from the impact of the training these programs provide. What do you think?

The effects of selectivity on program outcomes could lead someone to suggest that a highly selective program adds very little value, and that whatever performance differences distinguish its graduates from those of other programs are really a function of having more capable students. However, the fact is that you cannot separate the effects of a program from the abilities and interactions of program participants; they are integrally interconnected. Any attempt to statistically separate the quality of the participants from program effects through the use of VAM must pay very close attention to the limitations of such models.
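
To make the modeling choice concrete, here is a minimal, purely hypothetical sketch in Python. It is not the Indiana Department of Education's model or any actual VAM; the simulated data, variable names, and effect sizes are assumptions chosen only to show how treating candidate quality as part of program quality versus as a statistical control changes the estimated "program effects."

```python
# Hypothetical illustration only: simulated data, assumed effect sizes.
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # simulated novice teachers

# Which preparation program each teacher attended (A=0, B=1, C=2), and the
# academic ability of the candidates each program admits. Program C is the
# most selective in this simulation.
program = rng.integers(0, 3, n)
candidate_ability = rng.normal(loc=0.5 * program, scale=1.0)

# Simulated student-growth outcome: mostly candidate ability, plus small
# "true" training effects and noise.
true_training_effect = np.array([0.00, 0.05, 0.10])[program]
student_growth = 0.6 * candidate_ability + true_training_effect + rng.normal(0.0, 1.0, n)

# Design matrix: intercept plus dummies for programs B and C.
dummies = np.column_stack([np.ones(n), program == 1, program == 2]).astype(float)

def ols(X, y):
    """Ordinary least squares fit via numpy's least-squares solver."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Specification 1: selectivity treated as part of program quality.
# The program dummies absorb the ability differences among admitted candidates.
beta_integral = ols(dummies, student_growth)

# Specification 2: selectivity treated as an extraneous variable.
# Candidate ability enters as a covariate, so the program dummies reflect
# only what remains after ability is controlled away.
beta_controlled = ols(np.column_stack([dummies, candidate_ability]), student_growth)

print("Program effects (B, C), selectivity attributed to programs: ", beta_integral[1:3])
print("Program effects (B, C), selectivity controlled statistically:", beta_controlled[1:3])
```

Under the first specification, the more selective program appears to add substantially more value; under the second, most of that gap is assigned to candidate ability instead. Which estimate counts as the "real" program effect depends entirely on the theoretical assumption, which is the point Dr. Gonzalez raises.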

The StateImpact article makes it seem as though you are in favor of Superintendent Bennett's plan to award letter grades to teacher preparation programs. Is this true? What do you think the impact of this new policy will be?

I am not in favor of any policy that is developed without close consultation with primary stakeholders. A letter-grade system of accountability for schools of education seems like an oversimplification of a very complex evaluation problem. Any system of accountability that neglects the science behind VAM and does not incorporate stakeholder input is likely to be fundamentally flawed. To my knowledge, the Indiana Department of Education has not yet engaged higher education stakeholders in a meaningful discussion about evaluating teacher preparation programs in the state.

When Indiana committed to the development of a teacher education evaluation system in its NCLB waiver application, it promised to collaborate with schools of education and universities. All of us in higher education expect and welcome accountability and are eager to participate in that conversation.

-Graham Drake