Online writing tests favor frequent computer users

One-to-one programs, in which every student gets a computer, have become the latest rage, and there is certainly good reason to use technology in the classroom. But a new study finds that high-performing students use computers much more proficiently than low-performing students do. That proficiency gap pushes the scores of the highest- and lowest-performing online test-takers even further apart than they would be if all students had taken the tests with paper and pencil, or if all students taking the tests online had had the same opportunity to practice on a computer.

This more detailed study of the 2012 writing pilot for the National Assessment of Educational Progress, or NAEP, sometimes called the Nation’s Report Card, found that average- and low-performing students constructed better sentences on paper than on the computer. Higher-performing students tended to use spellcheck, backspace, and other editing tools more frequently, and they wrote longer essay responses on the computer than on paper: 179 words per assignment, compared with 60 words for low performers.

The differences between the highest- and lowest-performing students were less pronounced when students took the tests with paper and pencil. That’s because high-performing students scored much higher on the computer than on paper, while low- and middle-performing students (in other words, 80 percent of all students) didn’t appear to benefit from using the computer.

Voxitatis has reported that math tests should not be given online and that technology actually gets in the way of students expressing themselves in mathematics. This new study extends that argument to writing tests and forces us to consider the effect that taking standardized tests on computers may have on the achievement gap.

More important than the achievement gap, though, is the idea of fairness. Tests, especially those used for accountability purposes under federal law, must give every student the same opportunity to achieve each score point or reporting level.

The primary reason we opposed giving math tests on computers was that the online format gave an advantage to kids who took the test with paper and pencil: they could draw pictures more easily, a strategy teachers commonly use in our classrooms to teach, say, the solving of word problems. Scratch paper is good, but it isn’t scored. And since the Common Core requires us to assess not only the correct answer but also the procedure used to find it, including modeling, the explanation of the problem-solving approach, and other communication-related skills, scoring the tests without that scratch work gives a distinct advantage, in terms of points earned, to students who take the test on paper.

This advantage, which clearly favors frequent technology users over students who use technology less often, has nothing to do with what the test purports to measure: differences in how well students understand mathematics or, in the present study, how well they write and construct English sentences. That necessarily makes the test unfair, in that not all students have an equal opportunity to demonstrate their understanding of the content, even though their actual understanding may be identical.

That is, suppose Student A and Student B both have a 60-percent understanding of how to construct an English sentence, which is one thing the NAEP writing test purports to measure. If the test were valid, reliable, and fair, both students would get the same score because their understanding of the subject is identical.

Then suppose Student A has a computer at home and frequently uses it to write assignments for school, while Student B has no computer at home and always turns in his assignments on paper. If the writing test is given online, the present study suggests that Student A will get a higher score than Student B, a completely inaccurate assessment of their understanding of the learning standards the test purports to measure.
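To make that logic concrete, here is a minimal sketch, in Python, that simulates the two students. The numbers in it, a 60-point true ability for both students, a hypothetical 8-point boost for computer familiarity, and a small dose of measurement error, are invented purely for illustration; they are not drawn from the NAEP study.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

TRUE_ABILITY = 60.0  # both students understand 60 percent of the content
MODE_EFFECT = 8.0    # hypothetical score boost from computer familiarity
NOISE_SD = 3.0       # ordinary measurement error

def observed_score(true_ability: float, mode_effect: float) -> float:
    """Observed score = true ability + mode effect + random error."""
    return true_ability + mode_effect + random.gauss(0, NOISE_SD)

# Student A writes on a computer at home; Student B does not.
score_a = observed_score(TRUE_ABILITY, MODE_EFFECT)  # benefits from the online mode
score_b = observed_score(TRUE_ABILITY, 0.0)          # no such benefit

print(f"Student A's observed score: {score_a:.1f}")
print(f"Student B's observed score: {score_b:.1f}")
print(f"Apparent achievement gap: {score_a - score_b:.1f} points")
# Any gap printed here is an artifact of the testing mode plus noise,
# because both students' true ability is identical by construction.
```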

The bottom line with our hypothetical writing test is that the results will indicate an achievement gap between Student A and Student B even though no such achievement gap exists. If Student A has a different teacher, attends a different school, or differs in skin color or socioeconomic status, the media and politicians will erroneously assume the achievement gap is tied to one of those variables. We’ll also believe that Student A understands the “content” on the test better than Student B, even though, again, that is inaccurate.

What this study shows is that the gap may also be attributed, at least in part, to Student B’s having less experience with technology or with online testing environments.

Now we know that giving writing tests on computers advantages kids who are more likely to have computers in their homes, such as rich kids, and disadvantages kids who are less likely to have them, such as poor kids.
