The backstory on dislodging pass rate data


Twenty-three years after the U.S. Congress first tried—but largely failed—to dislodge the pass rates on teacher licensing tests, the dam has been broken. Last week, with surprisingly strong cooperation from 39 states, NCTQ published the pass rates of aspiring elementary teachers on tests verifying they have the content knowledge needed to teach. Within a few days of the initial release, a 40th state (Mississippi) turned in its data. More appear likely to follow.

Given the intense efforts to keep the data hidden, one pleasant surprise has been the response from teacher prep programs. We had assumed that many programs would continue to claim that the data weren't meaningful or that they unfairly advantaged the most selective schools. Instead, since releasing the data, we've heard from an unprecedented number of institutions and programs genuinely thanking us for giving them what they had long wanted and needed to pursue improvements. We've also heard from programs that had no idea they were excelling, thrilled to learn they might have a secret sauce to share with others.

So why the historic resistance, resistance so strong that it defeated the intent of Congress? Certainly, there is much in the data that might explain why some programs wouldn't have wanted it public. One in four institutions has more test takers failing than passing on the first attempt—a statistic that is not, as some assert, irrelevant, but one that matters a great deal. Large numbers of test takers make only a single attempt, particularly persons of color, perhaps deterred by discouragement, costly retake fees, and delays in securing a teaching job. And even on best-attempt pass rates, there was startling variability among institutions.

What's always been less clear is why states—which must approve programs before they can operate—have also seemed quite willing to overlook this data (and in the case of 11 states, still are) and, intriguingly, what finally made so many change their minds.

A lot has to do with higher ed's intimidation of state education agencies (and for that matter, much of K-12). Many state education officials with whom we communicated over the last couple of years described their relationship with their teacher prep programs as one of partnership. Sounds very kumbaya and collaborative, but it's also inappropriate. State education agencies are defined by statute as programs' regulators. (It's hard to imagine bank regulators publicly referring to Citibank or JP Morgan as "their partner" without getting hauled up to Congress.)

The role that the two test designers and publishers, ETS and Pearson, have played in suppressing the data deserves far more attention than it has gotten. Apparently, ensuring that no one knows their pass rates is good for business: how else to explain why so much control over states' tests remains with the publishers, not the states? Few states had the internal know-how to generate the basic data reports NCTQ requested; almost all had to turn to their publisher to do so. It didn't take long for us to experience some of the same frustrations states must have felt over the years, as some of the initial reports from ETS were highly misleading (a fact that, to its credit, ETS owned up to immediately before proceeding to cooperate fully with our requests).

Meanwhile, the other testing company, Pearson, took a hostile stance. When states asked it for the data to fulfill our request, Pearson charged them (and us), on the excuse that telling states in any detail how many test takers pass its tests was beyond the terms of their contracts. In a number of instances, the company then did not turn over what was requested, accused NCTQ of not being sufficiently clear in our request, and often refused to make corrections even when its client states asked it to do so—absent more money changing hands.

What's been far more damaging, however, is the failure by both test publishers to provide program-level pass rate data, on the claim that separating out who is a formal candidate in a program from who is not is just too difficult a task. Absent program-level data upon which programs can act, the numbers are of limited utility.

Given our own experience, the difficulty of wrestling the data out of the hands of the test publishers may explain much of why states hadn't previously reported it. How else to explain the change of heart? The truth is that states looked upon the NCTQ request as a welcome opportunity. That's not to say they weren't nervous, but they were willing.

It may also be that it was easier for states to contemplate this data going public when someone else could take the heat, a role NCTQ is accustomed to playing and happy to fill. We expect states may well find a more receptive audience among their programs than they had imagined. The initial rush from this broken dam has proven more calming than ruinous, flowing into long-parched ground thirsty for change.

Explore states' teacher licensure pass rate data in NCTQ's new release: