Access Type

Open Access Dissertation

Degree Name

Doctor of Education (EdD)

Degree Program

Education (also CAGS)

First Advisor

Stephen Sireci

Second Advisor

Aline Sayer

Third Advisor

Ronald Hambleton

Abstract

There is growing interest in the alignment between states' standards and test content, due in part to the accountability requirements of the No Child Left Behind (NCLB) Act of 2001. Among other problems, current alignment methods rely almost entirely on subjective judgment to assess curriculum-assessment alignment. In addition, none of the current alignment models accounts for students' actual performance on the assessment, and there are no consistent criteria for assessing alignment across the various models. Because of these problems, alignment results obtained with different models cannot be compared. This study applied item mapping to student response data from the Massachusetts Adult Proficiency Test (MAPT) for Math and Reading to assess alignment. Item response theory (IRT) was used to locate items on a proficiency scale, and two criterion response probability (RP) values were then applied to map each item to a proficiency category. Item mapping results were compared to the item writers' classifications of the items. Chi-square tests, correlations, and logistic regression were used to assess the degree of agreement between the two sets of data. Seven teachers were convened for a one-day meeting to review items that did not map to the intended level and to explain the misalignment. Results show that, in general, agreement between the SMEs' classifications and the item mapping results was higher at RP50 than at RP67. Higher agreement was also observed for items assessing lower-level cognitive abilities. Item difficulty, cognitive demand, clarity of the item, the vocabulary level of the item relative to examinees' reading level, and the mathematical concept being assessed were among the suggested reasons for misalignment.
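The abstract does not specify the IRT model or the cut scores used. As a minimal sketch only, assuming a two-parameter logistic (2PL) model and hypothetical theta-scale cut scores and category labels (not the MAPT's actual levels), the RP-based mapping step might look like:

```python
import math

def rp_location(a, b, rp):
    """Theta at which a 2PL item (discrimination a, difficulty b) is
    answered correctly with probability rp. Closed form: for RP50 this
    is simply b; higher RP values shift the location upward by ln(rp/(1-rp))/a."""
    return b + math.log(rp / (1.0 - rp)) / a

def map_item(a, b, rp, cut_scores, labels):
    """Assign an item to the proficiency category whose theta interval
    contains the item's RP location. cut_scores are ascending boundaries;
    labels has one more entry than cut_scores."""
    theta = rp_location(a, b, rp)
    for cut, label in zip(cut_scores, labels):
        if theta < cut:
            return label
    return labels[-1]

# Hypothetical item parameters, cut scores, and category names for illustration.
cuts = [-1.0, 0.0, 1.0]
labels = ["Beginning", "Low Intermediate", "High Intermediate", "Advanced"]
print(map_item(a=1.2, b=0.8, rp=0.50, cut_scores=cuts, labels=labels))  # → High Intermediate
print(map_item(a=1.2, b=0.8, rp=0.67, cut_scores=cuts, labels=labels))  # → Advanced
```

The example illustrates the abstract's central comparison: the same item can map to different proficiency categories under RP50 and RP67, because the stricter criterion locates the item higher on the theta scale.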


Included in

Education Commons