Author ORCID Identifier

N/A

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Education

Year Degree Awarded

2017

Month Degree Awarded

May

First Advisor

Michelle K. Hosp

Subject Categories

Special Education and Teaching

Abstract

A repeated measures study was conducted to determine the effects of test format on accuracy and fluency performance on a computer-based nonsense-word decoding task. Decoding is a phonics skill that is highly predictive of overall reading performance (Fletcher, Lyon, Fuchs, & Barnes, 2007). Identifying students who struggle with decoding and providing instruction to remedy skill deficits is therefore of high importance to teachers. One way teachers can determine the instructional needs of their students is through testing (Hosp & Ardoin, 2008). However, time dedicated to test completion in classrooms limits the time available for instruction, so it is prudent that testing practices be efficient while still yielding reliable and valid data that can inform instructional decision-making. This study examined test format as a variable that might improve the efficiency of decoding tests. Fifty-three second-grade students from a single elementary school in the Northeast participated. Participants completed a battery of decoding and reading tests: a computer-based, modified 100-word Nonsense Word Fluency (NWF) task that was formatted five ways, the DIBELS Next NWF benchmark, the Decoding Inventory for Instructional Planning - Screener (DIIP-S), the DIBELS Next Oral Reading Fluency (ORF) benchmark, and the Group Reading and Diagnostic Evaluation (GRADE). Results from a series of repeated measures ANOVAs showed performance differences across test formats on both the accuracy and fluency metrics. In addition, there were no performance differences between students' preferred and non-preferred formats. Finally, correlational analyses provided evidence of criterion-related validity for each test format, though the strength of that evidence depended on test format, score metric, and the criterion of interest. The effect of test format on student performance indicates that format is a promising variable for improving the efficiency of decoding testing practices. The results align with previous research showing that the number of words presented at a time affects reading speed, as fluency scores were significantly higher on formats with more words, but diverge from previous research on student motivation, as format preference had no effect on performance in this study. Implications for test development and directions for future research are discussed.
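
For readers unfamiliar with the analyses named in the abstract, the following minimal Python sketch illustrates the two techniques described: a one-way repeated measures ANOVA across test formats, and per-format Pearson correlations against an external criterion. It uses simulated data and hypothetical variable names (student, format, fluency, criterion) and is not the author's actual analysis code.

# Minimal sketch of the analyses described in the abstract, using
# simulated data and hypothetical variable names.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_students = 53
formats = ["A", "B", "C", "D", "E"]  # five hypothetical test formats

# Long-format data: one row per student per format (simulated scores).
df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), len(formats)),
    "format": np.tile(formats, n_students),
    "fluency": rng.normal(50, 10, n_students * len(formats)),
})

# Repeated measures ANOVA: does fluency differ across formats
# within the same students?
res = AnovaRM(df, depvar="fluency", subject="student", within=["format"]).fit()
print(res)

# Criterion-related validity: correlate each format's scores with an
# external criterion measure (e.g., an ORF benchmark), one r per format.
criterion = rng.normal(100, 15, n_students)  # simulated criterion scores
for fmt in formats:
    scores = df.loc[df["format"] == fmt, "fluency"].to_numpy()
    r, p = pearsonr(scores, criterion)
    print(f"format {fmt}: r = {r:.2f}, p = {p:.3f}")

With real data, the fluency column would hold each student's score under each format, and the ANOVA table would indicate whether the format effect reported in the abstract is statistically significant.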

DOI

https://doi.org/10.7275/9842952.0
