
Author ORCID Identifier

N/A

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Education

Year Degree Awarded

2016

Month Degree Awarded

May

First Advisor

Jennifer Randall

Second Advisor

Stephen G. Sireci

Third Advisor

Krista J. Gile

Subject Categories

Educational Assessment, Evaluation, and Research

Abstract

Concerns over fairness permeate every aspect of the testing enterprise, and one characterization of fairness in testing defined by the Standards (AERA, APA, & NCME, 1999) is fairness as lack of bias. One important way to study bias in the college admissions context concerns the degree to which prediction equations are equivalent across groups. To the extent that AP variables are used together with admission test scores and previous academic records to predict future academic achievement, it is important to know whether members of one group are systematically predicted to obtain lower or higher grades than they actually achieve on average (Linn, 1990, p. 309). Many studies have investigated differential predictive validity for different groups using high school performance and admission test scores as predictors (Linn, 1990). To date, however, minimal research attention has been directed toward differential predictive validity using Advanced Placement (AP) variables as predictors, although policy makers have begun to treat the AP experience as an additional important prerequisite for success in college (Breland et al., 2002). By examining the differential predictive ability of AP variables and controlling for predictor unreliability, we can better understand the extent to which these predictors are biased against particular groups. With this understanding, test users can be informed of the extent to which the inferences drawn from these variables are supported by strong validity evidence regarding fairness in admission. Against this backdrop, the current study examines whether AP exam scores predict first-year GPA and second-year retention differently for groups defined by ethnicity, gender, parental education level, and language, controlling for high-school-level variables using hierarchical linear modeling.
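The core idea of differential prediction can be sketched with a simple single-level regression: fit a model with a group indicator and a group-by-predictor interaction, and check whether the intercept or slope differs across groups. This is a minimal illustration on simulated data, not the dissertation's actual model (which uses hierarchical linear modeling with real AP, GPA, and retention data); all variable names and simulated effect sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)        # hypothetical 0/1 group indicator
ap = rng.normal(3.0, 1.0, n)         # hypothetical AP exam score predictor

# Simulate first-year GPA with a small intercept difference between groups
# (an assumed -0.2 shift for group 1), i.e., built-in differential prediction.
gpa = 2.0 + 0.3 * ap - 0.2 * group + rng.normal(0, 0.4, n)

# Design matrix: [1, ap, group, ap*group]. Nonzero coefficients on the last
# two columns indicate intercept and slope differences between the groups.
X = np.column_stack([np.ones(n), ap, group, ap * group])
beta, *_ = np.linalg.lstsq(X, gpa, rcond=None)

intercept_diff = beta[2]   # estimated intercept gap between groups
slope_diff = beta[3]       # estimated slope gap between groups
```

In a single-level analysis, evidence of differential prediction is a statistically meaningful `intercept_diff` or `slope_diff`; the study's hierarchical approach additionally nests students within high schools so that school-level variables can be controlled.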

DOI

https://doi.org/10.7275/8443298.0
