More holes in the greatest VAM story ever told


Margarita Pivovarova, Jennifer Broatch, and Audrey Amrein-Beardsley have published a meta-review in the Teachers College Record that considers some of the literature on so-called “value-added models” (VAMs), as they are used to evaluate the effectiveness of teachers in our public schools, usually by applying a formula to students’ test scores.

Specifically, they “critique and deconstruct the arguments proposed by the authors of a highly publicized study that linked teacher value-added models to students’ long-run outcomes.” That study, accepted for publication but not yet published, is by Raj Chetty, John N Friedman, and Jonah E Rockoff.

Released without peer review as a “working paper” on the website of the National Bureau of Economic Research, the study was used by President Barack Obama in a State of the Union speech and has come under fire, especially after an April 8 report from the American Statistical Association that urged caution about using VAMs for high-stakes purposes, such as teacher recruitment, raises, hiring, or firing:

Estimates from VAMs should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model. These limitations are particularly relevant if VAMs are used for high-stakes purposes.

  • VAMs are generally based on standardized test scores, and do not directly measure potential teacher contributions toward other student outcomes.
  • VAMs typically measure correlation, not causation: Effects—positive or negative—attributed to a teacher may actually be caused by other factors that are not captured in the model.
  • Under some conditions, VAM scores and rankings can change substantially when a different model or test is used, and a thorough analysis should be undertaken to evaluate the sensitivity of estimates to different models.

The report warned that using VAMs in teacher evaluation could lead to unintended consequences and reduce the quality of education provided to students: “VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs, or schools,” the association wrote.

Mr Chetty and his colleagues responded to the ASA report, which did not directly address their economics research but clearly targeted the same use of VAMs in teacher evaluation schemes. The authors of the Teachers College Record analysis, which, we should point out, has also not been published as a refereed article, apply the principles the ASA laid down to Chetty’s articles.

Ms Pivovarova, Ms Broatch, and Ms Amrein-Beardsley draw on recent academic literature to support their counter-arguments along the same main points of contention: causality of VAM estimates, transparency of VAMs, effect of non-random sorting of students on VAM estimates, and sensitivity of VAMs to model specification.

Chetty et al’s discussion of the ASA statement, however, should give others pause as to whether Chetty et al are indeed experts in the field. What has certainly become evident is that they have not wrapped their minds around the extensive body of literature on this topic. If they had, they might not have come off as so selective, and biased, citing only scholars from certain disciplines and certain studies to support the assumptions and “facts” upon which their criticisms of the ASA statement were based.

Paul Katula
Paul Katula is the executive editor of the Voxitatis Research Foundation, which publishes this blog. For more information, see the About page.
