
Author ORCID Identifier

N/A

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Education (EdD)

Degree Program

Education

Year Degree Awarded

2014

First Advisor

Craig Wells

Second Advisor

Stephen G. Sireci

Third Advisor

Aline Sayer

Subject Categories

Cognitive Psychology | Education

Abstract

Information integration theory (IIT), proposed by the cognitive psychologist Norman H. Anderson, is concerned with how an individual integrates information from two or more stimuli to derive a quantitative value. The theory focuses on the unobservable psychological processes involved in making complex judgments and is built around four interlocking concepts: stimulus integration, stimulus valuation, cognitive algebra, and functional measurement (Anderson, 1981). Since standard setting is a process in which subject matter experts are asked to make expert judgments about test content, it is an ideal context for applying IIT. The current study evaluates how IIT performs in actual operational standard setting workshops across three different exams: an HP storage solutions exam, an Excelsior College nursing exam, and the Trends in International Mathematics and Science Study (TIMSS). For each exam, cut scores were set using both the modified Angoff method and the IIT method. Cut scores were evaluated using Kane's (2001) framework for evaluating the validity of a cut score, which examines procedural, internal, and external sources of validity evidence. Procedural validity was comparable for the two methods: both took approximately the same amount of time to complete, and raters for both methods felt comfortable with the rating systems and expressed confidence in their ratings. Internal validity evidence was evaluated by calculating reliability coefficients. Inter-rater reliabilities were similar for the two methods; however, the IIT method also provided the data needed to calculate intra-rater reliability.
Finally, external validity evidence was collected on the TIMSS exam by comparing cut score classifications based on the Angoff and IIT methods to other performance criteria, such as teacher expectations of the student. In each case, the IIT method either equaled or outperformed the Angoff method. Overall, the current study highlights the potential benefits of incorporating IIT into standard setting practice: it produced industry-standard procedural, internal, and external validity data as well as additional information for evaluating raters. The study concludes that IIT should be investigated in future research as a potential improvement to current standard setting methods.

DOI

https://doi.org/10.7275/5474959.0
