We developed a six-step iterative process for building and evaluating a model of implementation fidelity appropriate for use in an instructionally embedded assessment system. Our work explicitly connects the literature on theories of action for assessment systems with the implementation fidelity literature originating from the program evaluation field. The steps include (a) developing a logic model identifying critical and optional implementation components; (b) identifying process data and indicators from the assessment system to represent each component; (c) developing hypotheses about expected patterns in the indicators representing different levels of implementation fidelity and identifying criteria for defining implementation levels; (d) conducting analyses to test the hypotheses; (e) using the results to refine the indicators and criteria; and (f) evaluating the strength of the evidence and identifying gaps. This process facilitates measuring action mechanisms and making and testing hypotheses about how critical implementation components are related to the intended outcomes of an assessment. Studying implementation fidelity for assessment systems can help us better understand how teachers use assessment results and where additional support may be needed. This work can also help evaluate the extent to which instructionally embedded or formative assessments are implemented as intended and whether all students are provided with sufficient opportunity to demonstrate what they have learned.
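To make steps (a)–(c) concrete, the sketch below models implementation components (critical vs. optional), attaches process-data indicators to each, and maps observed indicator patterns to a fidelity level. All component names, indicators, and thresholds here are illustrative assumptions, not the authors' actual model; the classification rule is a minimal stand-in for the criteria developed in step (c).

```python
# A hypothetical sketch of steps (a)-(c) of the six-step process.
# Names, indicators, and thresholds are invented for illustration only.
from dataclasses import dataclass, field


@dataclass
class Component:
    """Step (a): one implementation component from the logic model."""
    name: str
    critical: bool                                   # critical vs. optional component
    indicators: dict = field(default_factory=dict)   # step (b): indicator -> observed value


def fidelity_level(component, thresholds):
    """Step (c): classify a component's implementation level by comparing
    each indicator against a hypothesized minimum threshold."""
    if not component.indicators:
        return "low"
    met = sum(1 for ind, value in component.indicators.items()
              if value >= thresholds.get(ind, 0))
    share = met / len(component.indicators)
    if share >= 0.8:
        return "full"
    if share >= 0.5:
        return "partial"
    return "low"


# Example: a hypothetical component for administering embedded testlets.
testlets = Component(
    name="testlet administration",
    critical=True,
    indicators={"testlets_completed": 7, "instruction_days_logged": 20},
)
# One of two thresholds is met, so this classifies as partial fidelity.
print(fidelity_level(testlets, {"testlets_completed": 5,
                                "instruction_days_logged": 30}))  # prints "partial"
```

Steps (d)–(f) would then test whether these hypothesized thresholds actually distinguish fidelity levels in the process data, refine the indicators and criteria accordingly, and weigh the resulting evidence.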
Kobrin, Jennifer L.; Karvonen, Meagan; Clark, Amy; and Thompson, W. Jake
"Developing and Refining a Model for Measuring Implementation Fidelity for an Instructionally Embedded Assessment System,"
Practical Assessment, Research, and Evaluation: Vol. 27, Article 24.
Available at: https://scholarworks.umass.edu/pare/vol27/iss1/24