What have new teachers been taught about assessment?

It should be surprising to no one that the education field has come up with a $10 phrase to describe the questions good teachers have always asked themselves every day, such as "How much have my students learned?" or "How can I help my students understand what they haven't learned?" The phrase du jour is "data-driven instruction." Translated, the term simply refers to a teacher's purposeful use of student data, especially performance data derived from assessments, both informal (student work samples, quizzes) and formal (standardized tests).

If you've spent any time lately in a K-12 school, you've probably seen plenty of "data-driven instruction" in action. (In fact, just a few days ago, a report was released that gave it a big thumbs up as a means of improving student performance.) Contrast its prevalence in today's public schools with what future teachers are learning about it in their own teacher prep classrooms.

Our preliminary examination, released today, finds that few teacher prep instructors have it on their radar screens.

In this first report in a series, we begin with a relatively small group of institutions (48), finding strong evidence of a disconnect between schools and preparation programs when it comes to assessment.  In particular, the need for teachers to understand, interpret and apply the lessons learned from their students' results on standardized tests is widely ignored, in spite of (or perhaps because of) districts' and states' own strong focus on such tests.  In fact, in just over half of the programs in the sample, standardized testing is scarcely mentioned, let alone taught.

Instead, preparation in assessment tends to be limited to the classroom assessments that teachers will themselves select or construct, and even then the coverage is none too adequate. Actual practice is spotty, often confined to fairly modest assessment projects that teacher candidates complete independently during student teaching.


Some might argue that our expectations are too high, that teachers in training have enough to juggle without worrying about things like test question bias, norm vs. criterion referencing and standard deviations — things that they can learn once they're on the job.  We strongly disagree.  Every teacher entering a classroom should know not only the basic taxonomy associated with assessment, but also how to display and analyze assessment results, and how to distill the instructional implications from data emerging from a variety of informal and formal assessments.

Our confidence in the capacity of teacher candidates is not misplaced: several programs in the sample do offer the right stuff, including courses with extensive lecture coverage of assessment topics, numerous practice assignments involving assessment development, collaborative exercises in data analysis, subject-specific use of assessment data in instructional design, and support throughout from specialized textbooks. But this is just more evidence of a field in chaos, where not just every program in the country, but every professor, gets to decide what future teachers need.


This policy memo, made possible by a grant from the Michael & Susan Dell Foundation, is only the beginning of the picture we will be painting of this important area of preparation. In May 2012 we will release a full report with aggregate findings on preparation in assessment at these and an additional 150 programs (again, with the support of the Michael & Susan Dell Foundation), followed in late 2012 by program-specific ratings in the National Review of Teacher Preparation.

Julie Greenberg