An opinion piece in USA Today warns us that, because standards dropped so much under No Child Left Behind, we’re in for a rude awakening when scores from tests based on the Common Core are reported for US students over the next few weeks or months.
Michael J Petrilli and Robert Pondiscio write, bluntly:
Most states in the past set a very low bar. They “juked the stats.” The result was a comforting illusion that most of our children were on track to succeed in college, carve out satisfying careers, and stand on their own two feet.
To put it plainly, it was a lie. Most states set absurdly low academic standards before the Common Core, and their tests were even worse. In some cases children could even randomly guess the answers and be all but guaranteed to pass. Imagine being told year after year that you’re doing just fine, only to find out when you apply for college or a job, that you’re simply not as prepared as you need to be.
They have it right for the most part. In order to meet requirements for Adequate Yearly Progress, or AYP, as defined in federal law, states lowered the bar on their tests. They did this because students in 2009 weren’t learning any better than students in 2003 were, but they had to show “improvement.” The only way to do that, in the real world of US education, is to keep lowering the standards so that a higher percentage of kids meet the new but lower standard.
When the Common Core came out in 2010 and was tested beginning last year in both Maryland and Illinois, the tests from the Partnership for Assessment of Readiness for College and Careers, or PARCC, were much more difficult than the former reading and math tests used in both states. Our data-hungry schools are going to be shocked when, probably, fewer than half of our students achieve a passing score on the PARCC tests. When California released its results today, people in the state found out “34 percent of California’s students met achievement targets in math, and 44 percent met achievement targets in English language arts.”
Compare that to the ISAT and MSA, on which several schools, especially those in more affluent neighborhoods, posted passing rates approaching 100 percent the year before we started giving the PARCC tests.
Naturally our kids are going to get the sense that they’re failing, and you can argue, as Messrs Petrilli and Pondiscio do, that parents and schools should swallow this bitter pill because “without an accurate diagnosis, you can’t get well. Talk to your child’s teachers as soon as possible and make a game plan for getting them extra help at home and at school.”
That is, on a scale from 1 to 5 with 5 being the best and 3 being “passing,” your kid, who always got good grades, is going to come home with a 2. And what you should do, these writers suggest, is call your kid’s teacher right away to find out what you can do to get your kid on track.
I imagine many parents will do that, but trust me, it’s a dead end.
First of all, the data just don’t mean what testing companies say they mean. Many things besides reading and math contribute to a child’s overall preparedness for life, college, the workforce, and so on. Take those other factors, whatever they may be for your kid, into account.
Then, find out if you can help. Schools are having a tough time implementing the Common Core standards in reading and math, and perhaps teachers could use a little volunteer help.
And finally, please don’t let your child get miserable over this. He or she isn’t alone in receiving lower scores. Plus, it’s a complete waste of time and effort to get a bunch of numerical data and think it tells us how successful we are or—heaven forbid—how “happy” our kids are.
“How are we collecting data on happiness?” asks Josh Stumpenhorst, the 2012 Illinois Teacher of the Year and a middle school social studies teacher in Naperville. “Are we collecting data on physical and mental health? What sort of decisions are being driven by students’ joy and wellness in our schools? Are we creating intervention plans or pulling resources to support mental health as well as physical health?”
Probably we’re not, yet I would argue those data are infinitely more valuable than reading or math scores from a single test, a test based on questions that were field tested over a one-year period using kids who had no vested interest in the results (How do you think that went?).
Then, he asks, even if we did have data on kids’ happiness or well-being, “Is it at the same level as it is with math and reading? How can we obsess over a child’s reading scores when they’re hungry or struggling with obesity? Why are we worried about whether they will master math standards when they are clearly not well either physically or emotionally?”