Open Access Dissertation
Doctor of Philosophy (PhD)
Lisa A. Keller
Educational Assessment, Evaluation, and Research
Score reporting is an extremely important and yet often neglected component of large-scale assessment programs. One element of score reporting that frequently leads to misunderstanding is the interpretation of performance levels. One way to help define performance levels is through the use of "exemplars": test items intended to best characterize each performance level. In this study, a Monte Carlo simulation was conducted to examine the performance of two item-mapping methods and different criteria for identifying exemplars under several simulated conditions.
The results of the study were neither clear nor systematic across all conditions and performance levels; however, a few findings emerged. Adding a discrimination criterion to the response probability (RP) criterion alone improved the false positive rates for both tests, but the converse held for the true positive rates: the added discrimination criterion decreased them. With respect to both true and false positive rates, results under the normal ability distribution appeared better than under the skewed distribution for the Empirical-based method, whereas no clear pattern distinguished the two distributions for the Model-based method, suggesting that the Model-based method may be less susceptible to changes in the shape of the distribution than the Empirical-based method.
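To make the RP and discrimination criteria concrete, the sketch below shows one common way an item can be mapped to a performance level under a two-parameter logistic (2PL) IRT model: the item is located at the ability (theta) value where the probability of a correct response equals the RP criterion, and it is assigned to the performance level whose cut-score interval contains that location. This is a minimal illustration under assumed conditions, not the study's actual procedure; the item parameters, cut scores, RP value, and discrimination threshold are all hypothetical.

```python
import math

# Hypothetical 2PL item parameters: (discrimination a, difficulty b)
ITEMS = {"item1": (1.2, -0.5), "item2": (0.4, 0.3), "item3": (1.5, 1.1)}

# Hypothetical performance-level cut scores on the theta scale
CUTS = {"Basic": -1.0, "Proficient": 0.0, "Advanced": 1.0}

RP = 0.67     # response-probability criterion (an RP67-style rule)
MIN_A = 0.8   # assumed minimum-discrimination screen

def mapped_theta(a, b, rp=RP):
    """Theta at which a 2PL item reaches P(correct) = rp.
    Solving rp = 1 / (1 + exp(-a * (theta - b))) for theta gives
    theta = b + ln(rp / (1 - rp)) / a."""
    return b + math.log(rp / (1 - rp)) / a

def level_of(theta):
    """Highest performance level whose cut score is at or below theta."""
    level = "Below Basic"
    for name, cut in sorted(CUTS.items(), key=lambda kv: kv[1]):
        if theta >= cut:
            level = name
    return level

def exemplars():
    """Map each item to a level; drop items that fail the
    discrimination criterion (too flat to characterize a level)."""
    out = {}
    for item, (a, b) in ITEMS.items():
        if a < MIN_A:
            continue  # screened out by the discrimination criterion
        out[item] = level_of(mapped_theta(a, b))
    return out
```

With these illustrative values, item2 is screened out by the discrimination criterion, while the remaining items are assigned to the levels containing their RP locations. Tightening MIN_A removes more candidate exemplars (fewer false positives, at the cost of true positives), which mirrors the trade-off reported above.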
The study suggests that several factors should be considered when choosing an item-mapping methodology for identifying potential exemplars: the number of exemplars desired, the distribution of item difficulty across the scale, the shape of the ability distribution, and the resources available for content specialists to subsequently review the potential exemplars.
Karantonis, Ana, "Using Exemplar Items to Define Performance Categories: A Comparison of Item Mapping Methods" (2017). Doctoral Dissertations. 1101.