Voxitatis Blog

‘% Proficient’ can distort the student learning story

Two recent headlines in Chalkbeat paint an encouraging picture for student achievement:

Both celebrate notable progress. On closer inspection, however, each headline rests on a single statistic: the percentage of students scoring “proficient” or higher on a state test.

The idea of “percent proficient” is a familiar legacy of the No Child Left Behind era and its successor, the Every Student Succeeds Act. It’s easy to grasp — a higher percentage should mean more students meeting grade-level expectations. But as Harvard Graduate School of Education professor Andrew Ho warns, percent proficient is like “viewing progress through a funhouse mirror.” It can be misleading for educators, policymakers, and the public alike.

Ho’s research identifies three main problems.

1. Arbitrary markers

(Figure: the same group of students, split into different percentages above and below an arbitrary cut.)

Proficiency cut scores are established through a process that combines expert judgment with political negotiation, and the results vary significantly. One state’s “proficient” might be far more demanding than another’s, meaning differences in percentages may reflect different standards, not actual differences in student achievement.
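The effect of the cut itself can be sketched numerically. The example below is illustrative only: it assumes a hypothetical score scale where scores are roughly normal with mean 250 and standard deviation 25, and two invented state cut scores; no real state's data is used.

```python
from statistics import NormalDist

# Hypothetical: identical score distribution in both states
scores = NormalDist(mu=250, sigma=25)

cut_state_a = 240  # a more lenient "proficient" cut
cut_state_b = 260  # a more demanding cut in another state

# Percent scoring at or above each state's cut
pct_a = (1 - scores.cdf(cut_state_a)) * 100
pct_b = (1 - scores.cdf(cut_state_b)) * 100

print(f"State A: {pct_a:.1f}% proficient")  # about 65.5%
print(f"State B: {pct_b:.1f}% proficient")  # about 34.5%
```

Identical students, identical scores, yet one state reports nearly twice the proficiency rate of the other, purely because the line was drawn in a different place.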

2. Distorted growth

The percentage proficient can exaggerate or understate changes over time, depending on where the cutoff point falls. If many students score near the line — often when a school or state hovers around 50% proficient — even small score changes can cause the percentage to swing sharply. That swing may look like a major leap forward or back when, in reality, average scores barely moved.
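The swing described above can be made concrete. Again assuming a hypothetical normal score distribution (SD 25), the same small 2-point gain in average score moves percent proficient by very different amounts depending on where the cut sits:

```python
from statistics import NormalDist

before = NormalDist(mu=250, sigma=25)
after = NormalDist(mu=252, sigma=25)  # a small gain: 2 points, under 0.1 SD

def pct_proficient(dist, cut):
    """Percent of the distribution at or above the cut score."""
    return (1 - dist.cdf(cut)) * 100

# Cut at the peak of the distribution: the swing looks large
swing_at_peak = pct_proficient(after, 250) - pct_proficient(before, 250)

# Same gain, cut out in the tail: the swing looks small
swing_in_tail = pct_proficient(after, 290) - pct_proficient(before, 290)

print(f"cut at peak: +{swing_at_peak:.1f} points")  # roughly +3 points
print(f"cut in tail: +{swing_in_tail:.1f} points")  # under +1 point
```

The underlying growth is identical in both cases; only the placement of the cut changes the headline number.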

3. Misread achievement gaps

The same distortion affects equity discussions. If one group’s percent proficient is closer to 50%, it will appear to gain or lose ground more rapidly than another group, even if actual score changes are similar. This can lead to flawed conclusions about whether achievement gaps are widening or narrowing.
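A sketch of the gap distortion, using the same hypothetical scale (SD 25, cut at 250) and two invented groups that gain exactly the same 2 scale points:

```python
from statistics import NormalDist

SD, CUT = 25, 250

def pct_proficient(mean, cut=CUT, sd=SD):
    """Percent at or above the cut, assuming normal scores."""
    return (1 - NormalDist(mu=mean, sigma=sd).cdf(cut)) * 100

# Both hypothetical groups gain the same 2 scale points
group_x_gain = pct_proficient(252) - pct_proficient(250)  # mean near the cut
group_y_gain = pct_proficient(287) - pct_proficient(285)  # mean far above it

print(f"Group X: +{group_x_gain:.1f} points proficient")
print(f"Group Y: +{group_y_gain:.1f} points proficient")
```

Measured in percent proficient, the gap between the groups appears to narrow by about two points, even though the gap in actual scores has not moved at all.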

Let’s Do This Right

These pitfalls mean headlines based solely on “percent proficient” can give the wrong impression. For example, a “record-breaking” gain could simply be the result of more students crossing a line that sits at the peak of the score distribution — not a wholesale improvement in literacy. Conversely, smaller gains in percent proficient in a high-achieving group might mask real growth for kids well above the cutoff.

Ho argues that a better approach is to track changes in average scores. Unlike percent proficient, averages reflect the performance of all students, not just those clustered near an arbitrary threshold. This broader view helps teachers identify who is falling behind, encourages attention to students at all levels, and makes growth patterns more meaningful.
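The difference between the two metrics shows up even in a toy cohort. In this invented example, the stronger students improve substantially while no one crosses the (hypothetical) cut of 250 — percent proficient reports zero growth, while the average captures it:

```python
from statistics import mean

CUT = 250
before = [210, 230, 245, 260, 275, 300]
# The top half improves substantially, but no one crosses the cut
after = [210, 230, 245, 280, 295, 320]

def pct_proficient(scores, cut=CUT):
    """Percent of students scoring at or above the cut."""
    return 100 * sum(s >= cut for s in scores) / len(scores)

print(pct_proficient(before), pct_proficient(after))  # 50.0 -> 50.0: "no change"
print(round(mean(before), 1), round(mean(after), 1))  # 253.3 -> 263.3: real growth
```

By the headline metric, this school is flat; by the average, it gained ten scale points. The reverse failure mode exists too: students far below the cut can improve without registering.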

Yet, although the ways “percent proficient” misleads educators have been documented since the beginning of NCLB (around 2002), headline writers and policymakers alike keep insisting that this single statistic tells them something meaningful.

Ultimately, the tests we want are those that not only measure but also inform. Numbers should help answer two practical questions:

That requires assessments that are timely, relevant to the curriculum, and designed with teachers, students, and parents in mind — not just policymakers and headline writers. Everything else is just detail — perhaps a convenient headline metric, but far from a complete picture of student learning.
