Validity issues in standard setting
Standard setting is an important yet controversial aspect of testing. In credentialing, pass-fail decisions must be made to determine who is competent to practice in a particular profession. In education, decisions based on standards can have tremendous consequences for students, parents, and teachers. Standard setting is controversial because of the judgmental nature of the process. In addition, the nature of testing is changing: with the increased use of computer-based testing and new item formats, test-centered methods may no longer be applicable. How are testing organizations currently setting standards? How can organizations gather validity evidence to support their standards?

This study consisted of two parts. The purpose of the first part was to learn about the procedures credentialing organizations use to set standards on their primary exam. A survey was developed and mailed to 98 credentialing organizations. Fifty-four percent of the surveys were completed and returned. The results indicated that most organizations used a modified Angoff method; however, no two organizations used exactly the same procedure. In addition, the use of computer-based testing (CBT) and new item formats has increased during the past ten years. The results were discussed in terms of ways organizations can alter their procedures to gather additional validity evidence.

The purpose of the second part was to conduct an evaluation of the standard-setting process used by a state department of education. Two activities were conducted: first, the documentation was evaluated, and second, secondary data analyses (i.e., a contrasting-groups analysis and a cluster analysis) were conducted on data made available by the state. The documentation and the contrasting-groups analysis indicated that the standards were set with care and diligence. The results of the contrasting-groups analysis, however, also indicated that the standards in some categories might be somewhat high. In addition, some of the score categories were somewhat narrow in range. The information covered in this paper might be useful for practitioners who must validate the standards they create.
Education, Tests and Measurements
Kevin Charles Meara, "Validity issues in standard setting" (January 1, 2001). Electronic Doctoral Dissertations for UMass Amherst.