This study replicates and extends the work of Powers, Fowles, Farnum, & Ramsey (1994) and Russell & Tao (2004) by examining the influence that computer-print and handwriting have on raters’ scores. The replication employs an experimental design that presents the same set of responses to raters in four different formats. A second experiment explores the extent to which the presentation effect can be reduced by supplemental training that focuses specifically on the causes of the effect and includes practice scoring of responses presented in different formats. Consistent with the findings of Powers et al. and Russell and Tao, the first experiment indicates that responses to composition test items presented in handwritten form receive significantly higher scores than the same responses presented in computer-print form. This effect appears to stem from the greater visibility of errors in, and higher expectations for, computer-printed responses, coupled with the stronger sense of identification with the writer that handwriting generates. With supplemental training, the presentation effect was eliminated.
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Russell, Michael and Tao, Wei (2004). "The Influence of Computer-Print on Rater Scores," Practical Assessment, Research, and Evaluation: Vol. 9, Article 10. Available at: https://scholarworks.umass.edu/pare/vol9/iss1/10