For those of you that have your heads stuck under rocks, which apparently is the majority of test publishers, you have been MESSING UP.
That intro is in homage to Rebecca Martinson. If you don’t know who she is, hers is an incredible story about writing so effective that it got her booted from a position simply because it was too good. This University of Maryland Delta Gamma sorority girl resigned after writing a stylish but inappropriate email message claiming her sisters were awkward and boring.
She didn’t write it to test publishers like Pearson and ETS, who share the question-writing portion of the contract with PARCC to develop tests that cover the Common Core, but I’ve wanted to write a letter in her style about these tests ever since I read her original letter. I, however, have been restricted to writing about test questions, which are bad enough, because to pay true homage to Ms Martinson, I really needed an entire test. No entire test seemed to provide a worthy example for her style—until now.
Also, as you probably suspect, she didn’t quite say “messing up” in the last part of the sentence. But since I can’t drop any F-bombs on this site, my rephrasing will have to do. Thanks, Rebecca, for teaching us all a few lessons, good and bad, about highly effective writing. Now to continue in my own style.

Carol Burris posted this test on her blog and then wrote about it in the Washington Post. Ms Burris’s name should be familiar to readers: she was named the 2013 High School Principal of the Year by the National Association of Secondary School Principals. Plus, she authored a letter against the use of test scores to evaluate teachers, which was signed by more than 1,500 New York principals.
This test, which purports to measure grade 1 Common Core standards, is an example of what’s wrong with the testing industry. People point to bad tests like this one and conclude that the standards the tests were written to measure are bad. The real problem is different: having grade 1 standards invites standardized testing of those standards, and test publishers (this test was developed by Pearson) haven’t figured out how to do that well yet.
First of all, what teacher in her right mind puts a “45%” on a 6-year-old’s paper? This student knows that’s not good and, in this case, took the test home to show her mother, a teacher at Ms Burris’s school, who promptly discussed the matter with Ms Burris.
Second, take a look at the very first problem. When you see such a problem as the very first problem, you just know it’s going to be a “rough [expletive] ride” (thanks, Rebecca). Kids are told to “find the missing part.” Problem 1 shows five pennies and then a coffee cup. The label under the pennies is “part I know” and the label under the coffee cup, which is half-filled with coffee, is “whole.” The number 6 is written on the coffee cup. I don’t even know what I’m supposed to do.
How is it that a half-filled coffee cup can represent the “whole” of anything? Especially pennies. Is the cup supposed to hold six pennies instead of coffee, and that’s the missing part? Or maybe, the missing penny is in the coffee and kids are supposed to reach in there and pull it out. The question does ask them to “find” the missing part, but be sure to avoid third-degree burns as you follow the instructions. This is very confusing, but as you can see, the 6-year-old couldn’t figure it out either, although other evidence on the test suggests she knew perfectly well what 6–1 equals.
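For what it’s worth, the arithmetic the item is presumably after is trivial. Here is a minimal sketch, assuming (as the labels suggest) that the item wants the part that, together with the five pennies shown, makes the whole of 6; the function name is mine, not the test’s:

```python
def missing_part(whole: int, known_part: int) -> int:
    """Return the part that, added to known_part, gives the whole.

    This is the part-whole relationship the item's labels
    ("part I know", "whole") appear to be modeling.
    """
    return whole - known_part


# Problem 1 as described: whole = 6, known part = 5 pennies.
print(missing_part(6, 5))  # -> 1
```

A 6-year-old who can do this subtraction can still fail the item, which is exactly the point: the coffee-cup presentation, not the math, is the obstacle.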
Rule #1 of writing multiple-choice items is to make sure the kid can figure out the answer before reading the choices. That way, they can just look for the correct answer and bubble it in. Which brings up another thing: why are we giving 6-year-olds fill-in-the-bubble tests anyway? Do you sense a Rebecca-style point coming up here? Many are possible, but I will resist. The writers of this test (and the editors and reviewers) seem to have abandoned even these elementary principles of test design.
One reason we may be giving 6-year-olds multiple-choice tests is that test publishers don’t know how to write any other kind of question, either. For example, problem 12 asks kids to write a story to illustrate the subtraction sentence “8 – ___ = 2” and draw a picture. So now we’re asking kids to make up the problems as well as come up with the answers. That’s a useful skill. I can see how an employer in these young kids’ future will pay them good money for solving a problem like, “Write a report on all the types of problems we can solve in our business by writing JavaScript programs.”
Oh, I so want to drop an F-bomb on this one! But another part of Ms Martinson’s letter may suffice: Test publishers, “I pity you because I don’t know how you got this far in life, and with that in mind don’t [expletive] show up unless you’re going to stop being a goddamn …” idiot when it comes to test design. You must realize that the test cannot possibly align to the standards if kids can’t figure out how to answer the questions! Knucklehead!
See, what happens when we tell kids things like “Use cubes to solve” in problem 3 (when even I don’t know what that means: cube roots, cubing a number, or what) is that we open up the whole Common Core to criticism. Nothing aligned to the Common Core should tell a kid what process to use to solve a problem anyway, but that’s just another thing wrong with test questions like this.
If this test were truly aligned to the Common Core standards (it’s not, by the way, and it’s a very bad test), people would be right to criticize the Common Core. One test doesn’t make a curriculum, just as one item doesn’t make a PARCC test, but if too many bad ones get into the mix, the overall structure, be it a curriculum, a test, or a set of standards, is invalid.