Wednesday, May 14, 2025

Digital device choice affects test scores


Further research now underscores concerns testing experts have about the scores of students who take Common Core-aligned math and reading tests online, Education Week reports, citing assorted analyses from test providers and other organizations.


[Photo caption: This is how kids do math. None of this is possible with the equation editor online tool.]

Voxitatis reported several months ago that students who took Common Core-aligned math tests online, administered by the Partnership for Assessment of Readiness for College and Careers, or PARCC, scored lower at the high school level than those who took the tests on paper. We blamed the difference on the “equation editor,” a MathType derivative that requires students to select math symbols from numerous dropdown palettes and enter their math work in paragraph form, a format that resembles neither the way they learn mathematics nor the way they study it, complete homework, or come to understand it.

We quoted S James Gates, a physicist at the University of Maryland, a member of the Maryland State Board of Education, and a recipient of the National Medal of Science from President Barack Obama, as saying the equation editor was an inappropriate tool for doing the work of the PARCC tests in math.

Differences were not reported at all grade levels or for both subjects. The gap between online and paper scores was most pronounced in high school math, but it wasn’t entirely clear that it could be attributed solely to the equation editor; that was just a hypothesis several people, including me, had shared over the past year or so.

Ultimately, the online tests were found to be mostly comparable to the paper tests, and we haven’t said much about the subject since. But one new report found negative “device effects” among students in Ohio who used tablets to take the tests. PARCC had originally excluded test scores from Ohio, home to about one in seven PARCC test takers, before computing its numbers, in effect declaring the entire state an outlier.

With the numbers from Ohio included in the new report, it’s now clear that students who took the PARCC tests on tablets scored an average of 10 and 14 points lower, respectively, than those who took the tests on laptops and desktop computers.

Unlike the earlier studies, this one didn’t examine score differences between online test takers and paper test takers. That is, it doesn’t replicate the earlier work. It does, however, reinforce the findings obtained previously. The fact that kids who take a test on one device get lower scores than kids who take the same test on a different device is inconvenient, mainly because the type of device a kid uses has nothing whatsoever to do with either the kid’s understanding of mathematics at the appropriate grade level or the quality of his or her teachers.

As you can read for yourself, the report found that “extensive evidence of device effects was observed on nearly every assessment” when the results from Ohio, the largest state in PARCC at the time, were included.

I stand by my earlier remarks: Students’ familiarity with online test-taking tools like the equation editor is completely beside the point. I don’t care whether our students learn how to use some contrived MathType look-alike, which isn’t even a good way of writing mathematical expressions at the high school level, let alone a fair way of holding students, teachers, or schools accountable. It is, rather, an amateurish online tool that bears no resemblance whatsoever to how math is actually done in high school.

Any attempt on the part of schools to divert the public’s attention away from their own failures in the online testing realm and toward a made-up need to familiarize students with an online test-delivery tool reveals a motive to defraud students and communities. Schools should use technology for many good reasons; these tests aren’t one of those reasons—at least until a whole lot more research comes in and tells us what we’re dealing with.

I mean, there are variables that have an impact on student test scores—socioeconomic status, headaches, and so on—even in states that don’t start with the letter ‘O.’ Federal law requires us to test students once in high school and give each student a “score” for performance in math. The use of inadequate technology to assess that performance has both reduced the rigor of the problems on the test, in order to accommodate low-end answer-entry tools, and forced us to face the fact that we’re measuring more variables than we thought (or told the public about) besides actual math performance. Add device use and familiarity with specific, underdeveloped test-delivery tools to the list of variables. And tell people what’s happening.

Paul Katula, https://news.schoolsdo.org
Paul Katula is the executive editor of the Voxitatis Research Foundation, which publishes this blog. For more information, see the About page.
