Grants from the federal Teacher Incentive Fund to three big-city school districts—Chicago, New York, and Milwaukee—have been abandoned because the districts were unable to secure support from teacher unions while meeting the requirements of the grant, Education Week is reporting.
The grants, made in 2010, were for implementing performance-based compensation and professional development for teachers in those districts. The payouts would have totaled $88 million over five years, but teacher unions in all three cities could not agree to the grants' terms.
The Chicago Public Schools were in the first group of TIF grantees in 2010. They were using the grant to fund the Teacher Advancement Program (TAP), a common use of TIF money among other grantees as well. However, two major studies showed that while CPS had increased teacher mentoring through TAP and improved teacher retention, gains in student achievement were disappointing.
Still a debate over teacher incentive or “performance-based” pay
Using student test scores to make decisions about compensation, retention, or bonuses for teachers is a hot topic right now, but study after study seems to indicate that paying more to teachers whose students get higher scores on standardized tests doesn’t improve student performance. For example, a study surveying the 852 accounting programs at universities in the US, which examined merit-based pay for professors, reached the following conclusion:
Basing the compensation of accounting professors on merit pay in order to encourage better teaching, research and service is controversial. This study uses data from a survey of the 852 accounting programs in the United States to empirically examine the influence of merit-based salary plans. Findings indicate a strong positive association between the presence of a merit plan at a school and the quality of the school’s research outcomes. However, no association was found between the presence of a merit program at a school and the school’s teaching outcomes.
Let’s assume we can apply this research to elementary and secondary teachers, who, like college accounting professors, have job responsibilities outside the classroom (paperwork, hall monitoring, staff meetings, parent meetings) in addition to the direct teaching of students. Given this research, we would expect incentive pay to improve those aspects of teachers’ jobs, but we should not expect improvements in student outcomes resulting from their direct teaching.
Other studies, however, such as this one released just this month from Michigan State University and Cornell, suggest that “the lack of effects found in US teacher incentive pay experiments probably are in some part due to specific aspects of program design rather than failure of teachers to respond to incentives more generally.”
In other words, if you didn’t see any improvement in teacher performance after paying an incentive, the study probably was designed in a way that failed to induce teachers to respond to it. The paper notes that “there is a severe lack of empirical analysis into the optimal design of such programs.” With better-designed studies, the authors conclude, an actual effect of the incentives might be observed.
Our study establishes that teachers do indeed respond to incentives when they are strong enough. In particular, we find evidence that student achievement increases in response to stronger group incentives, which we interpret as coming from increases in teacher effort. That is, teachers’ effort increases as their contribution to the probability of award receipt increases. On average, our preferred estimates indicate that a 10 percentage point increase in teacher share increases math and social studies achievement by 0.02 standard deviations, while language scores increase by 0.014 standard deviations.
Let’s put this in plainer terms. Say teachers were getting a $1000 bonus for raising student scores on a test where the mean was 75 and the standard deviation was 10. Treating a 10 percent larger bonus as a rough stand-in for a 10-percentage-point increase in teacher share, this study suggests that if we tested students taught by those same teachers, under the same conditions and at the same point in their students’ learning, and offered a $1100 bonus instead, we could expect the mean score to rise to 75.2 (a 0.02 standard deviation gain) instead of just to 75.
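For readers who want to check the arithmetic, here it is in a few lines of Python. The test mean, standard deviation, and bonus figures are the hypothetical ones above; only the 0.02-standard-deviation estimate comes from the study.

```python
# Back-of-envelope check of the effect size. The mean (75) and standard
# deviation (10) are hypothetical; 0.02 SD per 10-percentage-point increase
# in teacher share is the study's preferred estimate for math/social studies.
effect_in_sd = 0.02
test_mean, test_sd = 75, 10
gain_in_points = effect_in_sd * test_sd  # convert SD units to test points
new_mean = test_mean + gain_in_points
print(new_mean)  # 75.2
```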
The jury may still be out on whether incentive pay leads to increased student learning, or we may simply need to offer larger incentives for student performance. But what I do know, from an analysis of the studies cited here, is that the effects are very difficult to detect and require advanced statistical models. Any time you start throwing a bunch of numbers at me, I know I can find another analyst (or perform the statistical analysis myself) who will defend a completely different conclusion with equal vigor.
Even with high-power statistics, in other words, we can barely notice a difference. These high-power stats aim for an analysis in which all factors except the hypothesized cause (the incentive pay) and measured effect (improvement in student test scores) are eliminated through the magic of changing the coefficients on the regression. That is, in the statistics world, analysts correct for any effects caused by factors other than the incentive pay. Right!
Readers are reminded that kids don’t go to school in a vacuum, even if statisticians (not teachers) can make you believe an experiment is free of all external variables. Furthermore, the fact that such advanced statistics are required means most people will never understand what you’re doing. They may not trust you either; I’m a little skeptical myself.
Finally, the very premise of these experiments on teacher incentive pay rests on the assumption that the tests used to measure student learning are valid and reliable. Given what standardized tests are and how much students differ in test-taking behavior and anxiety, that assumption is shaky, which unfortunately casts doubt on the results, no matter how much statistical analysis stands behind them.
On the Web:
(The Florida Department of Education and Board of Education did not follow the required process in creating the rule that established performance-based pay, the judge wrote. The problem is that teacher evaluation systems with performance-based pay have already been put in place and are likely to be used for the new school year, because there is no time to change them.)