A. Plan, collect, and analyze data on fidelity of implementation and balance fidelity measures with tailored, contextualized implementation measures. Data on the fidelity of an initiative’s implementation (also referred to as treatment integrity or implementation integrity) should be planned for and collected over time to track growth and identify opportunities for continuous improvement. Fidelity data can include measures of adherence, quality, and dosage, and can be collected in many ways, including through checklists, logs, surveys, and rating scales. The Fidelity Integrity Assessment (FIA) and the LEA Self-Assessment (LEASA) are examples of fidelity measurement tools that have been used in California.
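To make these three dimensions concrete, the minimal sketch below scores adherence, quality, and dosage from a single hypothetical observation record. The field names, rating scale, and data structure are illustrative assumptions only, not drawn from the FIA, the LEASA, or any other named tool.

```python
# A minimal sketch of scoring three common fidelity dimensions
# (adherence, quality, dosage) from hypothetical observation data.
# All field names and scales here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Observation:
    checklist: list[bool]        # item-by-item adherence (implemented or not)
    quality_ratings: list[int]   # e.g., 1-5 rubric scores per component
    sessions_delivered: int
    sessions_planned: int

def fidelity_scores(obs: Observation) -> dict[str, float]:
    return {
        # Adherence: share of checklist items implemented as designed
        "adherence": sum(obs.checklist) / len(obs.checklist),
        # Quality: mean rubric rating across observed components
        "quality": sum(obs.quality_ratings) / len(obs.quality_ratings),
        # Dosage: share of planned sessions actually delivered
        "dosage": obs.sessions_delivered / obs.sessions_planned,
    }

obs = Observation(
    checklist=[True, True, False, True],
    quality_ratings=[4, 5, 3, 4],
    sessions_delivered=9,
    sessions_planned=12,
)
print(fidelity_scores(obs))
# {'adherence': 0.75, 'quality': 4.0, 'dosage': 0.75}
```

Tracking even simple summaries like these over repeated observations supports the kind of growth-over-time analysis described above.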
While some fidelity tools have already been evaluated for psychometric quality, most initiatives will likely develop their own tools and approaches for ensuring implementation fidelity. Initiatives should therefore also plan to evaluate the psychometric quality of their fidelity tools, drawing on established approaches to measuring reliability and validity such as inter-observer agreement, rater bias analysis, internal consistency, and exploratory factor analysis.
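As a minimal illustration of two such checks, the sketch below computes Cronbach’s alpha (a common index of internal consistency) and simple percent agreement between two observers. The ratings are hypothetical, and a full psychometric evaluation would go well beyond these two statistics.

```python
# A minimal sketch of two common psychometric checks for a locally
# developed fidelity tool: internal consistency (Cronbach's alpha)
# and simple percent inter-observer agreement. Data are hypothetical.

import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix of ratings."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def percent_agreement(rater_a: np.ndarray, rater_b: np.ndarray) -> float:
    """Share of items on which two observers gave the same rating."""
    return float(np.mean(rater_a == rater_b))

scores = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]])
print(round(cronbach_alpha(scores), 2))  # 0.94 for this toy matrix

a = np.array([1, 0, 1, 1, 0])
b = np.array([1, 0, 1, 0, 0])
print(percent_agreement(a, b))  # 0.8
```

More rigorous checks, such as chance-corrected agreement or exploratory factor analysis, follow the same logic but require larger samples and dedicated statistical tooling.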
To be impactful, sustainable, and equitable, implementation must adapt to the context in which an initiative operates. Accordingly, it is important to balance fidelity with tailored, contextualized implementation. Bopp and colleagues (2013) recommend a rigorous yet flexible definition of implementation fidelity and completeness and suggest reexamining this definition at each phase of implementation based on formative and summative evaluation.
B. Identify measures that allow for local and statewide analysis. Evaluations should include locally collected data that are aligned with statewide measurement systems (e.g., measures on the California School Dashboard related to attendance and discipline). When state-supported initiatives share similar or aligned measures, they create an opportunity for collaborative inquiry and comparative analyses across all levels of the system.
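As one illustration of such alignment, the sketch below joins a locally computed school-level rate to a Dashboard-style statewide indicator on a shared school identifier. The file layout, column names, and CDS codes shown are hypothetical placeholders, not an actual Dashboard extract.

```python
# A minimal sketch of joining locally collected data to statewide
# indicators for comparative analysis. Column names and CDS codes
# are hypothetical placeholders.

import pandas as pd

# Local measure: school-level chronic absenteeism from district records
local = pd.DataFrame({
    "cds_code": ["01100170112607", "01100170123456"],
    "chronic_absent_rate_local": [0.12, 0.18],
})

# Statewide measure: Dashboard-style indicator for the same schools
state = pd.DataFrame({
    "cds_code": ["01100170112607", "01100170123456"],
    "chronic_absent_rate_state": [0.11, 0.20],
})

# Join on the shared identifier and compare the two sources
merged = local.merge(state, on="cds_code", how="inner")
merged["gap"] = (merged["chronic_absent_rate_local"]
                 - merged["chronic_absent_rate_state"])
print(merged)
```

The value of this kind of join depends on shared identifiers and comparable definitions, which is precisely why alignment between local and statewide measures matters.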
In addition, initiatives should have measurement instruments aligned with intended outcomes. When possible, these instruments should be standardized measures of the outcomes of interest or, if developed for the initiative, should include a plan for psychometric evaluation. Note, too, that collecting new data is not always necessary to identify outcomes. Using existing data, especially data integrated across sectors and systems, can reveal significant insights, reduce data collection burdens on schools and providers, facilitate longitudinal assessment, and support continuous quality improvement.
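The sketch below illustrates this point: a year-over-year trend computed from an integrated table already on hand, rather than from a new collection. The table and its columns are hypothetical.

```python
# A minimal sketch of reusing existing records for longitudinal
# assessment instead of fielding new collections. The integrated
# table and its columns are hypothetical.

import pandas as pd

records = pd.DataFrame({
    "school_id": ["S1", "S1", "S1", "S2", "S2", "S2"],
    "year": [2021, 2022, 2023, 2021, 2022, 2023],
    "suspension_rate": [0.06, 0.05, 0.04, 0.08, 0.08, 0.07],
})

# Year-over-year change per school, computed from data already on hand
records = records.sort_values(["school_id", "year"])
records["yoy_change"] = records.groupby("school_id")["suspension_rate"].diff()
print(records)
```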
C. Plan ahead for evaluation of impact. Local and state initiatives should plan a rigorous evaluation of impact before the initiative begins. Thinking ahead about how an initiative’s impact will be evaluated ensures that the right implementation data can be gathered from the outset. Keep in mind, too, that initiatives evolve over time, and that evolution may require designing new measures along the way.
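One concrete planning step is an a priori power analysis to size the evaluation before launch. The sketch below uses statsmodels for a simple two-arm comparison; the effect size, alpha, and power targets are illustrative assumptions, not prescriptions for any particular initiative.

```python
# A minimal sketch of a priori power analysis as one planning step
# before an initiative begins. The effect size, alpha, and power
# targets below are illustrative assumptions.

from statsmodels.stats.power import TTestIndPower

# Per-group sample size for a two-arm comparison, assuming a
# small-to-moderate standardized effect (d = 0.3), 5% alpha,
# and 80% power
n_per_group = TTestIndPower().solve_power(
    effect_size=0.3, alpha=0.05, power=0.8, alternative="two-sided"
)
print(round(n_per_group))  # roughly 175 per group
```

Running this kind of calculation early clarifies whether the planned scale of an initiative can support a credible impact evaluation at all, before any data collection begins.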