Stopping rules for fixed-form tests with graduated item difficulty are intended to end administration at the point where a student, having produced a pattern of incorrect responses, is sufficiently unlikely to answer subsequent items correctly. Although such rules are widely employed in fixed-form educational tests, little empirical research has evaluated them, even though these tests often carry important instructional and/or placement implications for students. In this manuscript, we propose and investigate a framework for evaluating stopping rules with respect to two important and sometimes conflicting criteria: (1) efficiency and (2) reliability. Using this framework, we provide an example in which we apply three increasingly complex methods for evaluating efficiency and two methods for examining reliability.
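To make the kind of stopping rule described above concrete, the sketch below implements a simple discontinue rule that halts administration after a fixed run of consecutive incorrect responses. The threshold and function name are illustrative assumptions for exposition, not the specific rules evaluated in the manuscript.

```python
def administer(responses, stop_after_incorrect=3):
    """Return the number of items administered under a simple
    discontinue rule: stop once `stop_after_incorrect` consecutive
    incorrect responses are observed.

    `responses` is the ordered list of scored responses the student
    would give on the full form (True = correct, False = incorrect).
    """
    consecutive_incorrect = 0
    administered = 0
    for correct in responses:
        administered += 1
        if correct:
            consecutive_incorrect = 0  # a correct response resets the run
        else:
            consecutive_incorrect += 1
            if consecutive_incorrect == stop_after_incorrect:
                break  # discontinue: ceiling reached
    return administered

# With graduated difficulty, errors cluster late; here the rule
# stops the test at item 8, after three incorrect in a row.
print(administer([True, True, False, False, True, False, False, False, True]))
```

Efficiency, in the sense used above, corresponds to how many items such a rule saves relative to the full form; reliability concerns whether the truncated response string still supports the same score or placement decision.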

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.