Pretest item calibration within the computerized adaptive testing environment
An issue of primary concern for computerized adaptive testing (CAT) is maintaining viable item pools. The CAT framework requires more items than conventional testing, which places greater demand on item calibration procedures. This dissertation addressed the important problem of calibrating pretest items within the framework of CAT.

The study examined ways to incorporate additional information available in the CAT environment into item parameter estimation, with the intent of improving the accuracy of item parameter estimates. Item parameter estimates were obtained under a range of conditions: five different Bayesian priors, four sample sizes, two fixed abilities for calibration (true versus estimated ability), and two sampling strategies. All variables were compared in a simulation study.

Results for sample size were not surprising: as sample size decreased, the error in the estimates increased. Also as expected, fixing true abilities for calibration produced more accurate item parameter estimates than fixing estimates of ability. Bayesian priors affected item parameter estimates differently depending on the sampling strategy used. In the random pretesting strategy, more general priors produced the best results. In the focused pretesting strategy, item-specific priors produced the best results when the priors were good, and the worst results when the priors were poor. Comparing the random and focused sampling strategies in terms of item difficulty, the random conditions produced slightly more accurate estimates than the focused conditions for the majority of items; however, the focused conditions produced much better estimates of difficulty for very easy and very difficult items. The random conditions yielded far more accurate estimates of item discrimination than the focused conditions.

In conclusion, the focused samples used in the study appear to have been too focused. Future research could investigate different ways of sampling examinees to ensure that sufficient variability is obtained for better estimation of item discrimination. Ways of translating judgmental information about items into numerical priors for estimation are another area in need of study. Finally, an interesting and useful extension of this work would be to examine the effect of poor item parameter estimates on ability estimation.
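The calibration design summarized in the abstract can be illustrated with a small simulation: examinee abilities are fixed at their true values, and the parameters of a single pretest item are estimated by maximizing a Bayesian posterior. This is a minimal sketch, assuming a 2PL item response model, a random pretesting sample drawn from N(0, 1), and illustrative "general" priors (lognormal on discrimination, normal on difficulty); these specific models, priors, and values are assumptions for illustration, not details taken from the dissertation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def p_2pl(theta, a, b):
    """2PL item response function: P(correct | theta, a, b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative generating parameters for one pretest item (not from the study)
a_true, b_true = 1.2, 0.5

# Fixed-ability calibration: abilities are treated as known.
# Random pretesting strategy: examinees drawn from the full N(0, 1) population.
theta = rng.normal(0.0, 1.0, size=1000)
u = (rng.random(theta.size) < p_2pl(theta, a_true, b_true)).astype(float)

def neg_log_posterior(params):
    a, b = params
    if a <= 0:
        return np.inf
    p = np.clip(p_2pl(theta, a, b), 1e-9, 1 - 1e-9)
    loglik = np.sum(u * np.log(p) + (1.0 - u) * np.log(1.0 - p))
    # Illustrative general priors: lognormal(0, 0.5) on a, normal(0, 2) on b
    logprior = -0.5 * (np.log(a) / 0.5) ** 2 - 0.5 * (b / 2.0) ** 2
    return -(loglik + logprior)

# MAP estimate of the item parameters, abilities held fixed
est = minimize(neg_log_posterior, x0=[1.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = est.x
print(f"discrimination a: true={a_true:.2f} estimated={a_hat:.2f}")
print(f"difficulty     b: true={b_true:.2f} estimated={b_hat:.2f}")
```

Replacing the N(0, 1) sample with draws concentrated near the item's difficulty would mimic the focused pretesting strategy; with little spread in ability, the likelihood carries less information about the discrimination parameter, which is consistent with the pattern of results reported above.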
Educational tests & measurements
Slater, Sharon Cadman, "Pretest item calibration within the computerized adaptive testing environment" (2001). Doctoral Dissertations Available from Proquest. AAI3000347.