A Framework for Evaluating Stopping Rules for Fixed-Form Formative Assessments: Balancing Efficiency and Reliability
DOI
https://doi.org/10.7275/16754500
Abstract
Stopping rules for fixed-form tests with graduated item difficulty are intended to stop administration at the point where, following a pattern of incorrect responses, a student is sufficiently unlikely to answer subsequent items correctly. Although stopping rules are widely employed in fixed-form educational tests, little empirical research has evaluated them, even though these tests often have important instructional and/or placement implications for students. In this manuscript, we propose and demonstrate a framework for evaluating stopping rules with respect to two important and sometimes conflicting criteria: (1) efficiency and (2) reliability. Using this framework, we provide an example in which we apply three increasingly complex methods for evaluating efficiency and two methods for examining reliability.
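The abstract does not specify a particular rule, but a common ceiling-style stopping rule of the kind described above halts administration after a fixed number of consecutive incorrect responses. The sketch below is a hypothetical illustration of that idea, not the authors' method; the function name, the threshold of three consecutive errors, and the 0/1 response coding are all assumptions made for the example.

```python
def apply_stopping_rule(responses, max_consecutive_incorrect=3):
    """Hypothetical ceiling-style stopping rule (illustration only).

    `responses` is the full ordered response pattern a student would
    produce on the fixed form (1 = correct, 0 = incorrect), with items
    ordered by graduated difficulty. Administration stops once the
    student answers `max_consecutive_incorrect` items in a row
    incorrectly. Returns the number of items actually administered,
    which is the quantity an efficiency analysis would compare against
    the full test length.
    """
    streak = 0  # current run of consecutive incorrect responses
    for administered, response in enumerate(responses, start=1):
        streak = 0 if response == 1 else streak + 1
        if streak >= max_consecutive_incorrect:
            return administered  # stop: ceiling reached
    return len(responses)  # rule never triggered; full form given


# Example: the rule truncates the test after the third straight error.
items_given = apply_stopping_rule([1, 1, 0, 0, 0, 1, 1])  # stops at item 5
```

Efficiency under such a rule is often summarized as the reduction in items administered relative to the full form, while reliability analyses ask whether the truncated response pattern supports the same score or placement decision as the complete one.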
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Recommended Citation
Basaraba, Deni L.; Yovanoff, Paul; Shivraj, Pooja; and Ketterlin-Geller, Leanne R. (2020). "A Framework for Evaluating Stopping Rules for Fixed-Form Formative Assessments: Balancing Efficiency and Reliability," Practical Assessment, Research, and Evaluation: Vol. 25, Article 8. DOI: https://doi.org/10.7275/16754500. Available at: https://scholarworks.umass.edu/pare/vol25/iss1/8