By Stephen McIntyre
Complex evaluations for complex interventions
Many interventions in health research can be considered complex, involving numerous interacting components, target behaviours, organisational levels, and outcomes. To evaluate interventions appropriately, evaluations must reflect this complexity. Although randomised controlled trials (RCTs) are considered the ‘gold standard’ for evaluating health interventions, the need to understand the processes and mechanisms through which interventions operate is increasingly recognised. Without this information, it may not be possible to interpret trial outcomes, and potentially effective interventions may be discarded prematurely.
Process evaluations address these concerns by investigating how, why, and for whom interventions are effective or ineffective. Recently, process evaluation research has thrived with the aim of opening the ‘black box’ of intervention research (i.e. understanding how interventions produce their effects). To assist researchers and policy makers in this task, the Medical Research Council (MRC) recently released guidance on the conduct and reporting of process evaluations of complex interventions.
Key process evaluation functions according to MRC guidance
Process evaluations are used to assess a variety of factors including:
- Intervention fidelity: The extent to which an intervention is implemented as intended.
- Reach: The extent to which participants come into contact with an intervention.
- Dose: The amount of an intervention that is delivered.
- Acceptability: The perceived appropriateness of an intervention from the provider and/or recipient perspective. For a more detailed discussion of this concept, look out for a forthcoming paper from City, University of London’s Mandy Sekhon on the acceptability of healthcare interventions.
- Barriers and enablers to implementation: A number of comprehensive frameworks specify a range of factors (e.g. cognitive, behavioural, organisational) that may act as implementation barriers and enablers, including the Theoretical Domains Framework and the Consolidated Framework for Implementation Research.
- Contextual factors: “The environment or setting in which the proposed change is to be implemented” or anything that is external to an intervention.
Ultimately, a comprehensive understanding of these factors may provide evidence to inform whether an intervention can be sustained, scaled up, or transferred to different populations or contexts successfully.
Pick and mixed-methods
Given the range of potential aims associated with process evaluations, it’s not surprising that they draw on a variety of methods. Reflecting their exploratory nature, process evaluations often incorporate qualitative methods to investigate how interventions are understood and how implementation and contextual processes are perceived. Quantitative methods may focus on examining these issues in larger samples, measuring key process variables, or testing hypothesised causal pathways. Commonly used qualitative methods include interviews, focus groups, and researcher-conducted observations, while quantitative methods may include questionnaires and secondary analysis of routine data.
Setting the twittersphere alight with #mixedmethodsmemes
Although multiple and mixed methods are often used in process evaluations, they are not always integrated, or are combined only in a limited manner. Tonkin-Crine et al. (2016) provide an example of a well-integrated mixed-methods process evaluation conducted alongside a trial to promote prudent antibiotic prescribing in general practice. The authors develop a comprehensive analysis by triangulating data from semi-structured interviews and questionnaires conducted with both patients and healthcare professionals. By comparing key findings across analyses, they examine areas of agreement and disagreement across datasets, as well as findings that are present in one dataset but absent in another.
Process evaluation frameworks and approaches
As well as the MRC framework on the conduct and reporting of process evaluations, a number of reporting and evaluation frameworks exist that offer alternative or complementary approaches to conceiving and conducting evaluation efforts. A recent framework devised by Grant et al. (2013) provides specific guidance on designing and reporting process evaluations conducted alongside cluster-RCTs. It highlights considerations such as: cluster and target population processes (e.g. intervention delivery, responses to the intervention), maintenance of intervention processes, associations between trial processes and effectiveness, unintended consequences, theoretical explanations for change, and context.
Theory-driven evaluation is another approach to process evaluation that explicitly examines the programme theory of an intervention (i.e. the explanation of how the intervention is expected to function and a description of what activities need to occur before others). For example, realist evaluation suggests that complex interactions between mechanisms (i.e. the resources offered by an intervention and how they influence participants’ reasoning) and contexts (e.g. social, political, historical) produce intervention outcomes.
Meanwhile, theory of change is a methodology that emphasises stakeholder involvement throughout the evaluation process. In this approach, a highly specified diagram is typically constructed that specifies the individual steps that need to occur for an intervention to achieve its intended goals as well as the assumptions underlying these changes.
My team’s work
My team are involved in the AFFINITIE programme – a project aiming to develop and evaluate two theory- and evidence-informed feedback interventions to promote evidence-based blood transfusion practice. The interventions are being evaluated in two pragmatic cluster-RCTs and parallel process evaluations. The AFFINITIE process evaluations will assess intervention fidelity multidimensionally according to five fidelity domains proposed by the Behaviour Change Consortium: study design, training, delivery, receipt, and enactment. We’re collecting a range of data via semi-structured interviews, questionnaires, and web analytics (e.g. downloads and usage of online intervention content). Planned mediation analyses will also examine associations between observed fidelity and trial outcomes.
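To give a flavour of what a mediation analysis of this kind involves, here is a minimal sketch in Python using the classic product-of-coefficients approach on simulated data. All variable names and effect sizes below are illustrative assumptions, not AFFINITIE data or the team’s actual analysis plan (which may use more sophisticated models):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Simulated illustrative data: observed fidelity (exposure), a hypothetical
# process variable (mediator), and a trial outcome. Effect sizes are made up.
fidelity = rng.normal(size=n)
mediator = 0.5 * fidelity + rng.normal(size=n)                  # a-path
outcome = 0.4 * mediator + 0.2 * fidelity + rng.normal(size=n)  # b-path + direct

def ols_coefs(y, *predictors):
    """Least-squares coefficients of y on the predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_coefs(mediator, fidelity)[0]          # fidelity -> mediator
b, direct = ols_coefs(outcome, mediator, fidelity)  # mediator -> outcome, direct effect
indirect = a * b                              # mediated (indirect) effect
total = ols_coefs(outcome, fidelity)[0]       # total effect of fidelity

print(f"indirect={indirect:.2f}, direct={direct:.2f}, total={total:.2f}")
```

A useful sanity check on this style of analysis: for linear models fitted by ordinary least squares on the same sample, the total effect decomposes exactly into direct plus indirect effects, so a large indirect component suggests the intervention’s effect on outcomes runs substantially through the measured process variable.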
Process evaluation resources
Process evaluation holds promise to open the ‘black box’ of intervention research but it remains an emerging field with plenty of opportunities for advancement. For those interested in learning more about process evaluations or my team’s work, here are some resources you may want to check out:
- Guidance on the design, conduct, and reporting of process evaluations from the Medical Research Council and Grant et al. (2013).
- Research from the Centre for the Development and Evaluation of Complex Interventions for Public Health Improvement (DECIPHer). Graham Moore, author of the recent MRC process evaluation guidance, leads their programme on complex intervention evaluation methods.
- The Center for Theory of Change website, which provides information relating to designing and conducting theory of change projects.
- Our AFFINITIE process evaluation protocol that’s just been published in Implementation Science!