
Author ORCID Identifier

N/A

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Education

Year Degree Awarded

2019

Month Degree Awarded

February

First Advisor

Craig S. Wells

Second Advisor

Ronald K. Hambleton

Third Advisor

Stephen G. Sireci

Fourth Advisor

Malcolm I. Bauer

Subject Categories

Educational Assessment, Evaluation, and Research

Abstract

Learning progressions provide potentially valuable information to teachers about how to develop a scope and sequence for a group of learning objectives. For learning progressions to be valuable, however, the hypothesized progressions must be supported by evidence. Although several approaches and models can be used to evaluate the validity of a learning progression, there is a dearth of research examining the advantages and limitations of each approach. The purpose of this study was to examine a multidimensional item response theory (MIRT) model and two cognitive diagnostic models (DINA and HO-DINA) for evaluating two learning progressions via a simulation study. In addition, the models were applied to empirical data to determine whether they provided consistent results. The results indicated that five methods of using the models, and the statistical procedures derived from them, to test the ordering of learning levels can complement one another. No single method consistently outperformed the others, but each was useful in certain contexts. With respect to assessing possible links among levels across progressions, the degree to which the models recovered the true information in the simulation studies varied with the model and the magnitude of the difference between learning levels: the more distant the levels, the more accurately the models recovered the true classifications. In the empirical analysis, the three models provided convergent evidence supporting almost all aspects of the theory underlying the two progressions considered in this study, and statistical results suggested a few revisions that would bring the theory more in line with the empirical evidence. Four limitations were discussed, and six directions for future research were proposed to address them. Finally, three practical implications were presented as take-away messages from this dissertation.
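For context on the cognitive diagnostic models named in the abstract, the sketch below illustrates the standard DINA (deterministic inputs, noisy "and" gate) item response function. It is a generic illustration rather than the dissertation's own implementation; the Q-matrix, attribute profile, and slip/guess values are invented for the example.

```python
import numpy as np

def dina_probability(alpha, q, slip, guess):
    """Probability of a correct response to each item under the DINA model.

    alpha : (K,) 0/1 attribute-mastery profile for one examinee
    q     : (J, K) Q-matrix; q[j, k] = 1 if item j requires attribute k
    slip  : (J,) slip parameters s_j
    guess : (J,) guessing parameters g_j
    """
    # eta_j = 1 only if the examinee has mastered every attribute item j requires
    eta = np.all(alpha >= q, axis=1).astype(float)
    # P(X_j = 1) = (1 - s_j)^eta_j * g_j^(1 - eta_j)
    return (1 - slip) ** eta * guess ** (1 - eta)

# Hypothetical example: 3 attributes, 2 items
alpha = np.array([1, 1, 0])
q = np.array([[1, 1, 0],
              [0, 1, 1]])
slip = np.array([0.1, 0.1])
guess = np.array([0.2, 0.2])
print(dina_probability(alpha, q, slip, guess))  # -> [0.9, 0.2]
```

The HO-DINA model extends this formulation by modeling the attribute profile itself as a function of a higher-order latent trait, which is one way the dissertation's models differ in how they represent learning levels.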

DOI

https://doi.org/10.7275/13487876
