Connecting the dots...


Over at Core Knowledge, Robert Pondiscio describes a colleague's classroom observations at a well-run, efficient, no-excuses charter school where "(e)verything was ordered and timed and assessed, yet the curriculum is crap." His correspondent wonders how, in a data-driven school, curriculum decisions could be anything but data-driven. According to the available data, the school is doing very well, but Pondiscio wonders about the long view--the 8th-grade slump? high school? college?--pointing to a courageous report from KIPP showing that only one-third of its former middle school students had completed a four-year college degree more than 10 years after their KIPP experience.

This got us thinking about some other things we'd read recently, including a New York Times op-ed from last month that, while covering familiar territory regarding the use of test scores in teacher evaluations, pointed to two thirty-year-old studies in which teachers who spent the most time presenting instruction that matched a prescribed curriculum, at a level students could understand based on previous instruction, were significantly more efficient than their peers (delivering as much as 14 more weeks of relevant instruction a year!).

And Bill Tucker has written about vibrant online teacher communities focused on the sharing and evaluation of lesson plans from around the country. He notes that such sites can alleviate planning burdens for novices in their first years, push highly effective veterans to improve their instruction, and just might provide outstanding designers some deserved recognition.

We found ourselves wondering why curriculum models--designed using the best research on learning and cognition; evaluated with rigorous short- and long-term student outcome measures; and slowly but inexorably improved by thousands of teachers designing, implementing, critiquing, and sharing lesson plans--are so frequently absent from debates over teacher quality, school turnaround, and ed reform.