Considerations for Local Approaches to Measuring the Effectiveness of Whole-Person Initiatives
This section offers considerations for local educational agencies (LEAs) as they think about how to measure the effectiveness of whole-person initiatives. The considerations are informed by the practical expertise of the impact education partners, the findings and lessons learned from the demonstration projects and measurement profiles that emerged from the group convenings, and research related to implementing and measuring initiatives focused on well-being. The considerations are grouped into three overarching categories that emerged from this inquiry—equity-centered evaluation, data-based inquiry for continuous improvement, and balancing local and statewide infrastructure and alignment.
Consideration 1: Equity-Centered Evaluation
LEA evaluation practices should be guided by equity. Social, emotional, mental, and behavioral supports and services must be rooted in a social justice lens and in equitable and anti-racist practices to ensure that well-intentioned efforts do not perpetuate systems of oppression.
A. Balance local and statewide approaches to measurement. While statewide analyses are helpful for understanding the impact of systemwide investments, it is equally important to maintain measurement at the local level to account for context, culture, and needs that are relevant for students’ social and emotional well-being and mental health. Child and youth development is largely a function of the dynamic between children and youth and their environment (Garcia-Coll et al., 1996). Therefore, evaluation of initiatives must account for the local conditions created for children and youth and the cultural context in which they live. Further, the evaluation should prioritize equity-focused outcomes and measures that build from the strengths, goals, and needs of the young people in the communities served. Evaluation must reflect and cultivate assets and promote the well-being of children and youth of all identities in order to create change (Jagers et al., 2019).
Statewide measures should be used to understand the impact and continuous improvement of statewide initiatives and should be informed by local measures. Locally determined outcomes and measures can help generate meaningful understanding of what is most important for the well-being of young people in different communities across the state, which can inform state-level policy and funding decisions, as well as the provision of differentiated technical assistance for LEAs. The Hawaii State Department of Education’s HĀ framework is a powerful example of a statewide framework grounded in local culture, values, and history.
B. Consider whose voice is missing. Statewide efforts should be made to gather, across every LEA, qualitative and experiential data that offer insight into how students, families, and educators experience California’s education systems. Those closest to the work should be a part of designing, implementing, evaluating, and refining the work. There are a variety of benefits to including all of these voices in the process. For instance, incorporating the voices of young people is critical to ensuring that the implementation of initiatives and interventions is responsive to their experiences. When these voices are included, strategies for serving the whole person are more authentic and relevant, improving the likelihood that they will be effective and sustainable. In addition, when young people have opportunities for leadership and partnership with adults, it supports their positive development by promoting social and emotional competencies such as agency, belonging, and identity (Jagers et al., 2019).
C. Apply a strengths-based approach to measurement and evaluation. Measurement and evaluation approaches should elevate local community assets and system strengths to inform continuous improvement efforts. This involves using strengths-based language in metric design, engaging in strengths-based inquiry, analyzing and sharing what is working (e.g., bright spots), and using data to scale promising practices.
Consideration 2: Data-Based Inquiry for Continuous Improvement
Efforts to understand the impact of whole-person initiatives should be grounded in varied data sources and should inform continuous system improvement.
A. Plan, collect, and analyze data on fidelity of implementation and balance fidelity measures with tailored, contextualized implementation measures. Data on the fidelity of an initiative’s implementation (also referred to as treatment integrity and implementation integrity) should be considered and collected over time to understand growth and opportunities for continuous improvement. Fidelity data can include measures of adherence, quality, and dosage. These data can be collected in myriad ways, including through checklists, logs, surveys, and rating scales. The Fidelity Integrity Assessment (FIA) and the LEA Self-Assessment (LEASA) are examples of fidelity measurement tools that have been used in California.
While some fidelity tools have already been evaluated for psychometric quality, it is likely that most initiatives will develop their own tools and approaches for ensuring implementation fidelity. Thus, specific plans for evaluating the psychometric quality of the fidelity tools should also be considered, including inter-observer agreement, rater bias, internal consistency, exploratory factor analysis, and other established approaches to measuring reliability and validity.
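To make two of the psychometric checks named above concrete, the sketch below estimates internal consistency (Cronbach’s alpha) and simple percent agreement between two raters for a hypothetical fidelity rating scale. This is an illustrative example only; the data, item names, and scale are invented and do not come from the FIA, the LEASA, or any California initiative.

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a set of items.

    item_scores: one inner list per item, with scores aligned across
    the same observations (e.g., the same observed sessions).
    """
    k = len(item_scores)
    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(obs) for obs in zip(*item_scores)]  # total score per observation
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

def percent_agreement(rater_a, rater_b):
    """Share of observations on which two raters gave the same rating."""
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

# Hypothetical data: five observed sessions scored on a three-item
# fidelity checklist (0-2 scale), one list per item.
items = [
    [2, 1, 2, 0, 2],  # item 1: adherence
    [2, 1, 1, 0, 2],  # item 2: quality
    [1, 1, 2, 0, 2],  # item 3: dosage
]
alpha = cronbach_alpha(items)

# Hypothetical data: two raters scoring the same five sessions on one item.
rater_a = [2, 1, 2, 0, 2]
rater_b = [2, 1, 1, 0, 2]
agreement = percent_agreement(rater_a, rater_b)
```

In practice, locally developed fidelity tools would also warrant the other checks named above (rater-bias analysis, exploratory factor analysis), which typically require larger samples and dedicated statistical software.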
For implementation to be impactful, sustainable, and equitable, it must adapt to the context in which an initiative is being implemented. Accordingly, it is important to balance fidelity with tailored, contextualized implementation. Bopp and colleagues (2013) recommend a rigorous, yet flexible definition of implementation fidelity and completeness and suggest reexamining this definition at each phase based on formative and summative evaluation.
B. Identify measures that allow for local and statewide analysis. Evaluations should include locally collected data that are aligned with statewide measurement systems (e.g., measures on the California School Dashboard related to attendance and discipline). When state-supported initiatives share similar or aligned measures, it creates an opportunity for collaborative inquiry and comparative analyses across all levels of the system.
In addition, initiatives should have measurement instruments aligned with intended outcomes. When possible, these instruments should be standardized measures of the outcomes of interest or, if developed for the initiative, should include a plan for psychometric evaluation. It should also be noted that it is not always necessary to collect new data to identify outcomes. Using existing data, especially data that are integrated across sectors and systems, can reveal significant insights, help reduce data collection burdens on schools and providers, facilitate greater longitudinal assessment, and support continuous quality improvement.
C. Plan ahead for evaluation of impact. Local and state initiatives should plan a rigorous evaluation of impact before the initiative begins. Thinking ahead about how an initiative’s impact will be evaluated ensures that the right implementation data can be gathered from the outset. Also keep in mind that the evolution of initiatives over time may generate a need for the design of new measures along the way.
Consideration 3: Balancing Local and Statewide Infrastructure and Alignment
It is necessary to align infrastructure and resources in order to support the effective measurement of whole-person initiatives.
A. Dedicate resources and infrastructure to support effective evaluations. Consistent with planning the methodological design, the resources and infrastructure to support effective evaluations should also be considered. The specifications for independent evaluations of statewide initiatives and investments are becoming more defined in statute. As these programs are designed and investments are considered, sufficient resources should be identified at the outset of the projects. For example, a percentage of the program’s total cost, scaled to the duration of the evaluation, should be factored into the design and implementation. Historically, statewide initiatives (e.g., MTSS) supporting the social, emotional, and health needs of students did not benefit from this planned dedication of resources, but the lessons learned from some of these long-standing initiatives could be used to inform the planning of future investments.
B. Leverage the California System of Support to ensure the alignment and coherence of California’s whole-person initiatives and related measurement efforts. The California System of Support is designed to support LEAs with knowledge- and capacity-building and continuous improvement through communities of practice (CoPs) and other technical assistance. Services and supports for LEAs include access to data, professional learning, and other technical assistance that support continuous system improvement. Such technical assistance includes the Transformative SEL Workgroup, the Community Engagement Initiative, the Equity Initiative, and the MTSS Initiative, all of which have conceptual frameworks that can help guide statewide measurement efforts. More recent whole-person investments, such as investments in community schools, universal prekindergarten (UPK), and extended learning, should also inform future measurement efforts.
The System of Support can also connect statewide perspective with local context by providing a coherent understanding of policy, practice, and measures of impact, and offering capacity-building tailored to the needs of California’s diverse local contexts, strengths, aspirations, and needs. Differentiated technical assistance offered to LEAs through the System of Support can address the nuances of local evidence-based implementation and statewide measurement. The System of Support can also help practitioners engage in shared learning about approaches for measuring whole-person initiatives.