
EVALUATION AND REMEDIATION OF RATER SCORING OF CONSTRUCTED RESPONSE ITEMS

Abstract
The primary focus of this study is the impact of variation in rater scoring of constructed-response items on credentialing exams used for licensure or accreditation in a professional field. In this type of exam, a candidate may be asked to write in detail about a legal opinion, an auditing report, or a patient diagnosis (to name a few examples), and a rater (often a professional from the field) is responsible for evaluating the response (Raymond, 2002). Unfortunately, it is impossible for a rater, even one who is well trained, to make such judgments without some amount of error. That error, over the course of an exam, may even out or be managed, having little or no effect on the accuracy of a candidate's pass/fail outcome. On the other hand, such error may keep some candidates from attaining a credential for which they are qualified, or allow licensure for those who are not. This research uses simulated data to analyze how various methods of identifying and mitigating poor ratings or raters affect the accuracy and consistency of pass/fail decisions.
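
To make the kind of simulation the abstract describes concrete, the following is a minimal, hypothetical Python sketch; it is not the study's actual design, and every parameter here (the ability distribution, rater severity spread, random-error magnitude, cut score, and mean-score aggregation) is an assumption chosen only for illustration. It generates true candidate abilities, adds each assigned rater's systematic severity bias plus random error, and compares the resulting pass/fail decisions to the error-free ones.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: candidate abilities and a fixed passing standard.
n_candidates = 1000
n_items = 10                       # constructed-response items per exam
true_ability = rng.normal(0, 1, n_candidates)
cut_score = 0.0                    # passing standard on the ability scale

# Raters contribute systematic bias (severity/leniency) and random error.
n_raters = 20
rater_bias = rng.normal(0, 0.3, n_raters)   # assumed severity spread
rating_sd = 0.5                             # assumed random rating error

# Randomly assign one rater to each candidate-item response.
assigned = rng.integers(0, n_raters, (n_candidates, n_items))
ratings = (true_ability[:, None]
           + rater_bias[assigned]
           + rng.normal(0, rating_sd, (n_candidates, n_items)))

# Aggregate item ratings into an observed exam score (simple mean here).
observed = ratings.mean(axis=1)

true_pass = true_ability >= cut_score
observed_pass = observed >= cut_score

# Decision accuracy: agreement between error-free and observed outcomes.
accuracy = (true_pass == observed_pass).mean()
false_fail = (true_pass & ~observed_pass).mean()   # qualified but failed
false_pass = (~true_pass & observed_pass).mean()   # unqualified but passed

print(f"decision accuracy: {accuracy:.3f}")
print(f"false-fail rate:   {false_fail:.3f}")
print(f"false-pass rate:   {false_pass:.3f}")
```

Remediation methods of the sort the study evaluates (for example, flagging aberrant raters or adjusting scores for rater severity) could then be applied to the simulated ratings and compared on the same decision-accuracy and false-pass/false-fail metrics.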
Type: dissertation (open-access article)