
Author ORCID Identifier

https://orcid.org/0000-0001-7372-9757

AccessType

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Education

Year Degree Awarded

2020

Month Degree Awarded

September

First Advisor

Craig S. Wells

Subject Categories

Educational Assessment, Evaluation, and Research | Educational Methods

Abstract

One desirable property of a measurement process or instrument is that its results remain invariant across subpopulations with similar trait distributions. Determining measurement invariance (MI) is a statistical procedure in which different methods are used depending on factors such as the nature of the data (e.g., continuous or discrete, complete or incomplete), sample size, measurement framework (e.g., observed scores or latent variable modeling), and other context-specific factors. To evaluate the statistical results, numerical criteria derived from theory, simulation, or practice are often used. One statistical method for evaluating MI is multiple-group confirmatory factor analysis (MG-CFA), in which the amount of change in the fit indices of nested models, such as the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the root mean square error of approximation (RMSEA), is used to determine whether the lack of invariance is non-trivial. Currently, in the MG-CFA framework for establishing MI, the recommended effect size is a change of less than 0.01 in the CFI/TLI measures (Cheung & Rensvold, 2002). However, this recommended cutoff is a very general index and may not be appropriate under some conditions, such as dichotomous indicators, different estimation methods, different sample sizes, and varying model complexity. In addition, current research on determining the cutoff value has ignored the consequences of the lack of invariance. To address these gaps, the present research evaluates the appropriateness of the current effect size of ΔCFI or ΔTLI < 0.01 in educational measurement settings, where the items are dichotomous, the item response functions follow an item response theory (IRT) model, the estimation method is robust weighted least squares, and the focal and reference groups differ from each other on the IRT scale by 0.5 units (equivalent to ±1 raw score).
A simulation study was performed with five crossed factors: the percentage of differentially functioning items, the IRT model, the IRT a and b parameters, and the sample size. The results showed that the cutoff value of ΔCFI/ΔTLI < 0.01 for establishing MI is not appropriate for educational settings under the foregoing conditions.
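To make the design concrete, the setup described in the abstract can be sketched as follows. This is not the author's actual simulation code; it is a minimal illustration, assuming standard 2PL item response functions, a 0.5-unit mean difference between the reference and focal groups on the latent scale, and hypothetical CFI values for the nested-model comparison:

```python
import numpy as np

def simulate_2pl(theta, a, b, rng):
    """Simulate dichotomous responses under a 2PL IRT model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (rng.random(p.shape) < p).astype(int)

rng = np.random.default_rng(0)
n_persons, n_items = 1000, 20
a = rng.uniform(0.8, 2.0, n_items)   # discrimination parameters
b = rng.uniform(-2.0, 2.0, n_items)  # difficulty parameters

theta_ref = rng.normal(0.0, 1.0, n_persons)   # reference group
theta_foc = rng.normal(-0.5, 1.0, n_persons)  # focal group, 0.5 units lower

x_ref = simulate_2pl(theta_ref, a, b, rng)
x_foc = simulate_2pl(theta_foc, a, b, rng)

def invariance_holds(cfi_less_constrained, cfi_more_constrained, cutoff=0.01):
    """Cheung & Rensvold (2002) rule: retain MI if the CFI drop is < cutoff."""
    return (cfi_less_constrained - cfi_more_constrained) < cutoff

# Hypothetical fit values for illustration only:
print(invariance_holds(0.985, 0.979))  # ΔCFI = 0.006 -> MI retained
print(invariance_holds(0.985, 0.970))  # ΔCFI = 0.015 -> MI rejected
```

In a full study, the two response matrices would be fit with MG-CFA (e.g., via robust weighted least squares, as the abstract specifies) and the ΔCFI rule applied to the resulting fit indices; the dissertation's finding is that this 0.01 cutoff misbehaves under these dichotomous-item conditions.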

DOI

https://doi.org/10.7275/19258798
