Off-campus UMass Amherst users: To download campus-access dissertations, please log in to the campus proxy server with your UMass Amherst username and password.

Non-UMass Amherst users: Please talk to your librarian about requesting this dissertation through interlibrary loan.

Dissertations that have an embargo placed on them will not be available to anyone until the embargo expires.

Author ORCID Identifier

https://orcid.org/0000-0002-5337-3682

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Education

Year Degree Awarded

2021

Month Degree Awarded

September

First Advisor

Jennifer Randall

Second Advisor

Lisa Keller

Third Advisor

Ian Barron

Subject Categories

Applied Statistics | Educational Assessment, Evaluation, and Research | Educational Methods | International and Comparative Education | Social Statistics | Statistical Methodology

Abstract

The United States and Great Britain together spent over 30 billion USD on international aid in 2018, more than a billion of which was dedicated to education programs alone. Recently, there has been increased attention to the rigorous evaluation of aid-funded programs, moving beyond counting outputs to measuring educational impact. The current study uses two methodological approaches (Generalizability Theory (Brennan, 1992, 2001) and Rasch Measurement Theory (Andrich, 1978; Rasch, 1980; Wright & Masters, 1982)) to analyze data from math and literacy assessments and self-report surveys used in an international evaluation of an educational initiative in the Democratic Republic of the Congo. These approaches allow the researcher to identify pertinent facets, examine them in relation to one another, and attribute larger or smaller sources of variability to particular facets; using both provides additional insight into instrument development and validation efforts. A thorough analysis was completed of five Early Grade Reading Assessment (EGRA) subtasks, five Early Grade Mathematics Assessment (EGMA) subtasks, and three sets of items from a survey administered to the girls in the study. Results suggest that two factors were consistently flagged as contributing to error in the outcome measures: enumerators and the language of administration/girl's home language. These results have implications for several phases of evaluations of educational initiatives in developing countries: developing the evaluation design; using a pilot to refine the design and sampling plan; and selecting the appropriate outcome measure, particularly in projects using payment-for-success models. The results also indicate the utility and complementary nature of Generalizability and Rasch Measurement Theory analytic procedures in assessing the quality of complex evaluation data. Evaluations such as the one used in this study are highly complex, with more possible sources of error than those included here. These results indicate that, although there is a wish to standardize assessment in difficult settings, it cannot be ignored that context affects not only the results of assessments like the EGMA and EGRA but also their utility.
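The dissertation's actual G-study designs involve more facets than can be shown here; as a rough illustration of the variance-partitioning logic the abstract describes, the following Python sketch estimates variance components for a hypothetical persons-by-enumerators crossed design. The two-facet design, sample sizes, and simulated effect sizes are all illustrative assumptions, not the study's data or results.

```python
import numpy as np

def g_study_pxr(scores):
    """G-study for a persons x raters (p x r) crossed design with one
    observation per cell, estimated via expected mean squares."""
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    ss_p = n_r * ((person_means - grand) ** 2).sum()
    ss_r = n_p * ((rater_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_pr = ss_total - ss_p - ss_r  # p x r interaction confounded with error

    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

    # Solve the expected mean squares; clip negative estimates to zero.
    var_pr = ms_pr
    var_p = max((ms_p - ms_pr) / n_r, 0.0)
    var_r = max((ms_r - ms_pr) / n_p, 0.0)
    return var_p, var_r, var_pr

def d_study(var_p, var_r, var_pr, n_raters):
    """D-study: relative (generalizability) and absolute (dependability)
    coefficients for a design averaging over n_raters raters per person."""
    rel_err = var_pr / n_raters
    abs_err = (var_r + var_pr) / n_raters
    g_coef = var_p / (var_p + rel_err)
    phi = var_p / (var_p + abs_err)
    return g_coef, phi

# Hypothetical simulation: 200 girls each scored by 6 enumerators.
rng = np.random.default_rng(0)
n_p, n_r = 200, 6
person = rng.normal(0, 1.0, size=(n_p, 1))   # true person (girl) effects
rater = rng.normal(0, 0.5, size=(1, n_r))    # enumerator severity effects
resid = rng.normal(0, 0.8, size=(n_p, n_r))  # interaction + error
scores = 50 + person + rater + resid

var_p, var_r, var_pr = g_study_pxr(scores)
g, phi = d_study(var_p, var_r, var_pr, n_raters=1)
print(f"var(person)={var_p:.3f}  var(enumerator)={var_r:.3f}  var(residual)={var_pr:.3f}")
print(f"single-enumerator design: E(rho^2)={g:.3f}  Phi={phi:.3f}")
```

In this sketch, a large enumerator variance component depresses the dependability coefficient (Phi) even when persons are well separated, which mirrors the kind of flag the abstract describes for enumerators as a source of error; the dissertation's own designs, facets, and estimates should be consulted for the actual analysis.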

DOI

https://doi.org/10.7275/24148853

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
