
Author ORCID Identifier

N/A

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Education

Year Degree Awarded

2018

Month Degree Awarded

May

First Advisor

Lisa A. Keller

Second Advisor

Craig Wells

Third Advisor

Anna Liu

Subject Categories

Education | Educational Assessment, Evaluation, and Research

Abstract

In licensure testing programs, some examinees attempt the test multiple times until they are satisfied with their final score; those who take the same test repeatedly are referred to as repeaters. Previous studies suggested that repeaters should be removed from the total sample before implementing equating procedures, for two reasons: 1) the repeater group is distinguishable from both the non-repeater group and the total group, and 2) repeaters may memorize anchor items and cause item drift in the common items of the non-equivalent groups with anchor test (NEAT) design. However, removing repeaters might not be the best solution if the testing program has only a small number of examinees (e.g., teaching licensure tests with 20-30 examinees per test form). Excluding repeaters yields an even smaller sample size and results in high bias and error (Kolen and Brennan, 2014). Additionally, the population invariance property might not hold because of the differences between the total sample group and the repeater group. Therefore, three solutions for dealing with repeater effects were proposed in the current study: 1) excluding repeaters, 2) including repeaters but removing problematic anchor items, and 3) applying Rasch equating to capitalize on the invariance property. The main purpose was to investigate which solution(s) can mitigate negative repeater effects. The secondary purpose was to compare identity equating, nominal weight mean equating, circle-arc equating, and Rasch equating across small, medium, and large sample sizes on a mixed-format test. The data generation was manipulated by repeater ability level, repeater proportion, drift in the anchor test due to prior exposure, and sample size. Both purposes were evaluated with equating bias, equating error, and population invariance measures. Furthermore, the practical implications were discussed based on the accuracy of pass/fail decisions.
Lastly, recommendations regarding appropriate repeater-effect solutions and small-sample equating techniques were made for the given test conditions. The most important finding reveals that the performance of repeater-effect solutions and small-sample equating techniques depends highly on the anchor test. If the anchor items did not drift, retaining all repeaters provided higher equating accuracy and decision accuracy than excluding repeaters. However, if the anchor test was problematic and drifted due to exposure, using circle-arc equating or identity equating, or removing repeaters, can significantly prevent high equating bias. Accordingly, the study recommends removing repeaters if the drift status is unknown. At the small sample sizes (i.e., N = 20 and N = 50), identity equating had the most satisfactory performance. At larger sample sizes, circle-arc equating provided the most stable equating results, while nominal weight mean equating minimized violations of the population invariance property of equating. Rasch equating, however, is not applicable at sample sizes smaller than 300.
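To illustrate the kind of small-sample method the abstract compares, the following is a minimal sketch of the simplified circle-arc idea: fit a circular arc through the minimum and maximum possible score points and a middle point that maps the new-form mean onto the reference-form scale. The function name and the score values are hypothetical and not taken from the dissertation; this is not the study's actual implementation.

```python
import math

def circle_arc_equate(x, low, high, mid):
    """Equate raw score x via the arc through (low, low), (high, high),
    and mid = (mid_x, mid_y), where mid maps the new-form mean onto the
    reference-form scale (e.g., from mean equating). Illustrative only."""
    ax, ay = low, low
    bx, by = mid
    cx, cy = high, high
    # Center of the circle through the three points (perpendicular bisectors).
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        # (Near-)collinear points: the arc degenerates to the identity line.
        return x
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r2 = (ax - ux) ** 2 + (ay - uy) ** 2  # squared radius
    # Take the semicircle that contains the middle point.
    sign = 1.0 if by >= uy else -1.0
    return uy + sign * math.sqrt(r2 - (x - ux) ** 2)

# Hypothetical 0-100 test whose new-form mean of 50 equates to 55:
print(circle_arc_equate(50, 0, 100, (50, 55)))   # 55.0
print(circle_arc_equate(100, 0, 100, (50, 55)))  # 100.0
```

The arc passes exactly through all three anchor points, so the endpoints of the score scale are preserved while scores between them are pulled toward the reference-form mean, which is what makes the method stable at very small sample sizes.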

DOI

https://doi.org/10.7275/11917158.0
