DOI

https://doi.org/10.7275/p85x-c908

Abstract

In this study, we describe an approach to calculating the standard errors of weighted scores while maintaining a link to the IRT score metric, then use the approach to compare three sets of weights. Weighting a mathematics test's multiple-choice items, short-answer items, and extended constructed-response items to achieve a ratio of 2:2:6 on the raw score metric had little effect on examinee scores or standard errors. Ratios of 3:3:4 and of 1:1:8 required more extreme weights and had a slightly larger, but still small, effect on results, increasing the standard errors. Overall, as the difference between the intended emphasis and the test's design increased, the effect of the weighting also increased. Accessed 23,240 times on https://pareonline.net from June 25, 2004 to December 31, 2019.
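The weighting the abstract describes amounts to choosing a per-section multiplier so that each section's weighted maximum contributes a target share (e.g., 2:2:6) of the total. A minimal sketch of that arithmetic is below; the section point totals and the `section_weights` helper are hypothetical illustrations, not values or code from the study.

```python
def section_weights(max_points, target_ratio):
    """Return per-section weights so that the weighted section maxima
    fall in target_ratio, while the overall weighted maximum equals
    the original total raw-score maximum. Hypothetical helper."""
    total = sum(max_points)
    # Unscaled weights: each section's share divided by its point total.
    raw = [r / m for r, m in zip(target_ratio, max_points)]
    # Rescale so the weighted total maximum matches the raw total.
    scale = total / sum(w * m for w, m in zip(raw, max_points))
    return [w * scale for w in raw]

# Hypothetical section maxima: 40 MC points, 20 SA points, 40 ECR points.
max_points = [40, 20, 40]
weights = section_weights(max_points, [2, 2, 6])
weighted_max = [w * m for w, m in zip(weights, max_points)]
# weighted_max is [20.0, 20.0, 60.0]: a 2:2:6 split of the same 100 points.
```

Note how the 1:1:8 ratio mentioned in the abstract would force more extreme multipliers for the same section maxima, which is consistent with the reported increase in standard errors.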

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
