
Author ORCID Identifier

N/A

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Education

Year Degree Awarded

2018

Month Degree Awarded

September

First Advisor

Lisa Keller

Subject Categories

Quantitative Psychology

Abstract

The primary focus of this study is the impact of variation in rater scoring of constructed-response items on credentialing exams used for licensure or accreditation in a professional field. In this type of exam, a candidate may be asked to write in detail about a legal opinion, an auditing report, or a patient diagnosis (to name a few examples), and a rater (often a professional from the field) is responsible for evaluating the response (Raymond, 2002). Unfortunately, it is impossible for a rater, even one who is well trained, to make such judgments without some amount of error. That error may even out or be managed over the course of an exam, having little or no effect on the accuracy of a candidate's pass/fail outcome. On the other hand, such error may keep some candidates from attaining credentials for which they are qualified, or allow licensure for those who are not. This research uses simulated data to analyze how various methods of identifying and mitigating poor ratings or raters affect the accuracy and consistency of pass/fail decisions.
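The dissertation's actual simulation design is not described on this page. As a rough, hypothetical illustration of the general idea only, the following Python sketch simulates candidates with known proficiencies, adds normally distributed rater error to item-level scores, and estimates decision accuracy (agreement with the true pass/fail status) and decision consistency (agreement across two independent scorings). All parameter values, the additive error model, and the single cut score are assumptions made for this sketch, not the author's method.

```python
import numpy as np

rng = np.random.default_rng(42)

n_candidates = 5000
n_items = 10          # constructed-response items per exam form (assumed)
cut_score = 0.0       # pass/fail cut on a standardized true-score scale (assumed)
rater_sd = 0.4        # magnitude of rater scoring error (assumed)

# True candidate proficiencies on a standardized scale.
theta = rng.normal(0.0, 1.0, n_candidates)
true_pass = theta >= cut_score

def observed_decision(theta, rater_sd):
    """Score each item with additive rater error, average, and apply the cut."""
    errors = rng.normal(0.0, rater_sd, (len(theta), n_items))
    observed = theta[:, None] + errors    # item-level observed scores
    exam_score = observed.mean(axis=1)    # candidate's overall exam score
    return exam_score >= cut_score

# Decision accuracy: agreement between observed and true pass/fail status.
decision_a = observed_decision(theta, rater_sd)
accuracy = np.mean(decision_a == true_pass)

# Decision consistency: agreement between two independent scorings (replications).
decision_b = observed_decision(theta, rater_sd)
consistency = np.mean(decision_a == decision_b)

print(f"decision accuracy:    {accuracy:.3f}")
print(f"decision consistency: {consistency:.3f}")
```

Varying `rater_sd` in a sketch like this shows how larger rater error lowers both accuracy and consistency; the dissertation compares specific methods for detecting and mitigating such error, which are not reproduced here.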

DOI

https://doi.org/10.7275/12678290
