Social prescribing involves trusted individuals in clinical and community settings identifying non-medical, health-related social needs and connecting people to community-based supports through a collaboratively developed social prescription. Social prescribing operates within a dynamic, complex adaptive system, which makes evaluation challenging. This study presents two Australian case studies of co-designed social prescribing programs, <i>Connect Local</i> in Melbourne and <i>Spark</i> in Adelaide, to examine how evaluation can be conceptualized and implemented. Rather than focusing on program delivery, these case studies are used to interrogate the processes, methodological challenges, and system conditions that shape how impact is understood. Both initiatives faced shared evaluation complexities: balancing meaningful data collection with individual and community preferences; measuring impact in ways that meet the needs of interest holders; and responding as the contexts in which the programs are delivered evolve and reshape what constitutes 'success.' Analyzing the two case studies against the assumptions of linear, simple systems highlighted that a shift is required in how evaluation is conceptualized and undertaken. Impacts were not static, discrete, measurable outputs but dynamic processes shaped by relationships, shared meaning-making, and adaptive capacity. Conventional evaluation frameworks centered on linear logic models and fixed indicators do not effectively capture impacts driven by relationships, community capacity, and adaptive change. Program success must therefore be reframed as an emergent process in which outcomes unfold through interactions between individuals, organizations, and wider systems. This study argues for a shift from milestone-based models to ongoing, stewardship-oriented approaches that prioritize monitoring patterns, relationships, and adaptive responses.
Indicators may need to shift from static quantitative measures to relational indicators that reflect relationship alignment, coherence of working practices, and growth within networks and relationships. The question this research poses is: <i>How can evaluators identify and track indicators that remain meaningful when both the context and the intervention are evolving, and the outcomes are therefore also changing?</i> By examining the evaluation journeys of <i>Connect Local</i> and <i>Spark</i>, this study demonstrates the need for methodological approaches that align with complexity, center community voice, and account for the emergent, co-constructed nature of social connection impacts.
Published in: Frontiers in Public Health
Volume 14, Article 1761023