Introduction
Many commentators seek to define complexity in connection with complex interventions.1 Rogers distinguishes between simple, complicated and complex: simple is encapsulated in following a recipe, complicated in sending a rocket to the moon, and complex in bringing up a child.2 In the first paper of this WHO series on Complex Interventions, Petticrew and colleagues turn the emphasis away from the activity itself (intervention or exposure) and towards the perspective adopted by the evaluator (in this case an ‘intervention perspective’ or a ‘systems perspective’).1 These perspectives offer alternative evaluation ‘lenses’ to be adopted by reviewers or guideline developers even when examining the same phenomenon. For example, when examining use of a safety checklist within operating theatres,3 one could either adopt an ‘intervention perspective’ or ‘lens’ to consider issues that relate to implementation within a controlled setting (the theatre), or adopt a ‘systems perspective’ or lens to explore the wider organisational or system culture within which the checklist is being implemented (eg, within a culture of blame or of improvement).
When making decisions about complex interventions, guideline development groups need to take account of the sociocultural acceptability of the intervention, as well as how feasible the intervention will be to implement. Complex interventions are inextricably linked to context; interventions interact with, and sometimes change, the context within which they are implemented.1 Recognition that complex interventions are context-dependent not only holds implications for the effect of the intervention, but also for its sustainability, acceptability and feasibility. This paper examines implications of adopting a ‘systems perspective’, as opposed to an ‘interventions perspective’, when formulating questions to be addressed by qualitative research. As with the first paper in the WHO Complex Interventions series,1 it focuses on the first part of the evidence synthesis process, defining the question. This paper reflects on frameworks for structuring systematic review questions, informed by a rapid review of existing frameworks, to evaluate their suitability when exploring complex interventions. The paper proposes an alternative framework, the PerSPEcTiF framework, for further testing.
It is increasingly recognised that systematic reviews of effects do not adequately capture how or why the effects of complex interventions differ according to context.4 5 Decision makers are demanding different types of synthesis to provide such evidence. Qualitative evidence synthesis (QES), for example, increasingly contributes to recommendations from WHO and other guideline development processes.6 7 QES can provide evidence for diverse questions beyond those that typically relate to the feasibility and acceptability of complex interventions (see Box 1).4 8 9 QES can potentially provide rich data relating to the context of interventions, policies or conditions and the lived experiences, views and beliefs of those involved. However, typical question frameworks for QES do not adequately account for a complexity perspective.10 11 In particular, they do not account for the presence and assimilation of multiple stakeholder perspectives or for the importance of contextual variation, both of which are critical if QES findings are to support holistic decision-making and if guidelines are to be applied with contextual sensitivity. As Squire and colleagues emphasise:
Box 1: Complexity-related questions to be addressed in a qualitative evidence synthesis (QES)
Potential research questions for a QES
How do the components work alone and in combination to produce effects?
How do they interact to produce outcomes?
How and why does the implementation of the intervention vary across contexts?
How does the system change when the intervention is introduced?
What are the effects (anticipated and unanticipated) which follow from this system change?
How do effects change over time? (Changes may relate to biological, ecological, epidemiological or social factors)
What explains how effectiveness of the intervention changes over time?
What factors enable or inhibit implementation of interventions?
What changes in processes and outcomes follow the introduction of this system change?
At what levels in the system are they experienced? (eg, individuals, families, communities)
To what extent do patients/beneficiaries value different health outcomes?
Is the intervention socioculturally acceptable to patients/beneficiaries as well as to those implementing it?
Is the intervention socioculturally acceptable to the public and other relevant stakeholder groups?
To what extent do patients/beneficiaries value different non-health outcomes?
How accessible—in terms of physical as well as informational access—is the intervention across different population groups?
What are the barriers and facilitators to implementing the intervention?
Such complexity … makes the task of formulating a good review question both more important and more difficult. Furthermore, given the expected heterogeneity, systematic review questions should go beyond simple effectiveness questions (eg, ‘does X work?’) to consider under what circumstances X works.10
Guideline development organisations, such as WHO, the National Institute for Health and Care Excellence in the UK and other members of the Guidelines International Network (G-I-N), need to develop guideline recommendations that are feasible and acceptable to those planning, providing, implementing or receiving care. In turn, guideline development requires systematic review methodologies that explore the complexity of interventions, the context in which they are implemented and, the emphasis of this paper, the lens or evaluation frame through which they are evaluated.12