INDIANAPOLIS (Nov. 14, 2009)—The marching band from North Hardin High School in Radcliff, Ky., takes the field at Lucas Oil Stadium in semi-final competition at the Bands of America Grand National Championships.
Directed by Brian Froedge and led on the field by drum majors Danbee Leethacker, Miriah Grady, Javon Tolbert, and Nichelle Green, the show is entitled “KA.” As it opens, a purple strip unfolds amid music that explodes on a dime.
The band had posted this year's performance for the Kentucky Music Educators Association on YouTube, but as of November 2014 the video had been taken down, possibly over a copyright issue.
The title refers to an unknown god, and the word has been used as an epithet of Prajapati and Brahma. It also has been used to refer to the spiritual part of the soul that survives after death, such as in Egyptian mythology.
Music is by Ryan George and includes a flute/clarinet duet in the middle movement, which is joined by a trumpet quartet on the right 40-yard line, playing a rather melancholy passage.
This is followed by other small ensembles, each giving a new birth to part of the soul.
The show closes with two marchers pulling a tarp over the band, symbolizing death. A girl inside a volcano structure, apparently the god of the show’s title, is worshipped by other dancers as she rolls out of the mountain, climactic music sounding the show’s finale.
For the graphs below, we have sketched each band’s percentile on the trait specified (individual music and music ensemble here) in semi-finals against their percentile on that trait in the prelims.
For example, a point at 36% on the horizontal axis and 9% on the vertical axis (North Hardin in individual music) indicates that the band placed at the 36th percentile (2nd quintile) among the 34 semi-finalist bands in the prelims and at the ninth percentile (bottom quintile) among those same 34 bands in the semi-finals.
Each point on this scatterplot represents a band, with the band’s preliminary score in individual music plotted on the x axis and the semi-final score on the y axis. If every band placed exactly the same among these 34 bands in both prelim and semi-final competition, all points would be plotted on the theoretical red line. Bands that placed higher in the semi-finals than they did in prelims would have a point above the line.
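To make the construction of the scatterplot concrete, here is a minimal sketch of the percentile comparison. All scores below are invented, and the strictly-below percentile-rank definition is an assumption for illustration; only the method (percentile rank in prelims on the x axis, in semi-finals on the y axis, compared against the red identity line) mirrors the analysis.

```python
# Sketch of the percentile comparison behind the scatterplot.
# All scores here are invented; only the method mirrors the analysis.

def percentile_rank(score, scores):
    """Percent of bands scoring strictly below `score`."""
    below = sum(1 for s in scores if s < score)
    return 100.0 * below / len(scores)

# Hypothetical caption scores for four bands (not real data).
prelim = {"Band A": 17.2, "Band B": 18.5, "North Hardin": 17.9, "Band D": 16.8}
semi   = {"Band A": 17.5, "Band B": 18.8, "North Hardin": 17.0, "Band D": 17.3}

points = {
    band: (percentile_rank(prelim[band], list(prelim.values())),
           percentile_rank(semi[band], list(semi.values())))
    for band in prelim
}

for band, (x, y) in points.items():
    side = "above" if y > x else "below" if y < x else "on"
    print(f"{band}: prelims {x:.0f}th pct, semis {y:.0f}th pct ({side} the red line)")
```

A band whose point falls below the identity line, like the hypothetical North Hardin row here, placed worse in the semi-finals than it did in the prelims.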
We would expect individual music scores to show low agreement between any two judges simply because the judge for individual music is on the field. What the judge hears depends entirely on where he or she is standing when it happens. Furthermore, we don't expect bands to get worse in individual music performance between Thursday afternoon, when North Hardin's prelim score was given, and mid-afternoon Saturday, when they performed in semi-final competition.
It could happen, say, if many members fell ill on Friday and their performance suffered. However, as far as the performance itself goes, it is safe to assume that it didn’t change much. In any event, it probably would not have gotten worse.
We would not expect such high variability in the music ensemble scores, since the judge for that trait sits at a high vantage point, where he or she should be able to hear everything that happens in the music.
We expected music ensemble scores to hug the theoretical red line more closely than individual music scores, mainly because of the placement of the judge, but that's not what happened. North Hardin placed at the 52nd percentile among these 34 bands during the prelims and at the 55th percentile during the semi-finals, so judging looks more consistent for them in music ensemble than it was in individual music; for other bands, however, score differences stood out in music ensemble as well.
Each point plotted on the graph represents a band: 100 to 300 kids and thousands of dollars. To see an Excel spreadsheet of this analysis for all semi-final bands, save the target of this link, here, and open it with Microsoft Excel 2007.
One factor in the change in North Hardin’s individual music score could be a form of rater bias known as “scoring sequence bias.” For many scoring analysts, this is not a form of bias in itself but rather a side effect of what is known as “central tendency bias.”
This occurs when a rater, say of essay questions on a statewide standardized exam, gets a batch of papers in which only one student writes anything close to the correct answer. As he reads papers that are completely off base, he keeps telling himself that they couldn't possibly be that bad and starts judging them closer to the center of the score continuum than they really are.
Then, upon reading the only paper in the bunch with a semblance of a clue, he tends to give it a higher score than it deserves, simply because the essays he read before it seemed closer to the middle, in other words better, than they really were.
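The essay-scoring scenario can be sketched as a toy simulation. The contrast strength of 0.3 and every quality value below are assumptions chosen purely for illustration; the model simply nudges each observed score by how the current performance compares with those the rater has already heard.

```python
def sequence_biased_scores(true_qualities, contrast=0.3):
    """Toy model of scoring sequence bias (contrast=0.3 is an assumed strength).

    Each observed score is the true quality nudged by how the performance
    compares with the average of the performances the rater has already heard.
    """
    observed, heard = [], []
    for q in true_qualities:
        if heard:
            baseline = sum(heard) / len(heard)
            score = q + contrast * (q - baseline)
        else:
            score = q  # nothing to compare the first performance against
        observed.append(round(score, 1))
        heard.append(q)
    return observed

# A run of weak essays followed by one strong one: the strong one is inflated.
print(sequence_biased_scores([60, 62, 58, 65, 85]))

# A run of strong acts followed by a decent one: the decent one is deflated.
print(sequence_biased_scores([90, 88, 92, 85, 70]))
```

In the first sequence the closing score comes out above its true quality of 85; in the second it comes out below its true quality of 70. The same nudge that inflates a good performance after weak ones penalizes a decent performance after strong ones.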
Since judging marching bands is subject to the same human nature as scoring essay questions on standardized tests, a band's score can be affected by the bands that performed before it, depending on its place in the performance order. This gives rise to the common expression, "They're a tough act to follow."
In preliminary competition, North Hardin performed 13th, behind 10 bands that did not make the semi-finals and two that advanced only because of their class. When North Hardin came along, admittedly better in individual music, the judge's perception may have been distorted by having heard so many bands perform below the median in individual music: North Hardin sounded great by comparison, and the judge perhaps gave them a higher score than they truly deserved.
Scoring sequence bias cuts both ways. During the semi-finals, North Hardin followed Marcus, American Fork, Lawrence Central, Carmel, Ben Davis, Broken Arrow, and The Woodlands, all of which received higher scores than North Hardin; Marcus, in fact, earned the top semi-final score in individual music. Measured against the bands that came before it, the Trojan Band's individual music score may have been the victim of scoring sequence bias, which would explain the significant drop in its individual music placement between prelims and semi-finals.
To mitigate this bias as much as possible, Bands of America could use a random performance order for the semi-finals. In practice, however, a pattern emerges: bands performing early in the day during semi-finals receive lower scores than those performing later in the day.
Consider general effect scores, for example. The first eight bands performing in the semi-finals were first, second, third, fourth, 14th, sixth, seventh, and eighth from the bottom during prelims in their general effect score. This gives the appearance of non-random selection, despite Bands of America’s claim (see Grand Nationals program, p. 28) that “Semi-finalist directors draw for performance times after Friday night’s awards ceremony.”
The probability that a random draw would place the lowest-scoring band first, and so on, for the first eight of 34 bands, with the fifth position unconstrained, is about 1 in 27 billion. We note that Bands of America never used the word "random" in its description of how the order is determined. If it had, the press would have serious questions for the organization to which our schools, our corporations, and our citizens send $6.2 million annually.
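That figure is easy to check. Under a uniformly random draw, the seven named bands (the first-, second-, third-, fourth-, sixth-, seventh-, and eighth-lowest from prelims) must each land in one specific slot of the performance order, while the fifth slot can go to anyone:

```python
from math import factorial

N = 34  # semi-finalist bands in the draw
K = 7   # constrained slots: performance positions 1-4 and 6-8

# Ordered ways a random draw can fill 7 specific slots from 34 bands; only
# one of these arrangements matches what actually happened.
ways = factorial(N) // factorial(N - K)  # 34 * 33 * 32 * 31 * 30 * 29 * 28
print(f"about 1 in {ways:,}")  # about 1 in 27,113,264,640
```

Note that the fifth slot still contributes a factor even though it is unconstrained: the free performer consumes one of the remaining bands before positions 6 through 8 are drawn, which is why the product runs from 34 down through 28 with no gap. The result, roughly 27.1 billion, matches the "about 1 in 27 billion" figure above.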