
A guide to systems-level, participatory, theory-informed implementation research in global health
  1. Nadine Seward1,
  2. Charlotte Hanlon2,3,
  3. Saba Hinrichs-Krapels4,
  4. Crick Lund5,6,
  5. Jamie Murdoch7,
  6. Tatiana Taylor Salisbury5,
  7. Ruth Verhey8,
  8. Rahul Shidhaye9,
  9. Graham Thornicroft5,
  10. Ricardo Araya5,
  11. Nick Sevdalis10
  1. Centre for Implementation Science, Department of Health Service and Population Research, King's College London, London, UK
  2. Institute of Psychiatry, Psychology and Neuroscience, Health Service and Population Research Department, Centre for Global Mental Health, King's College London, London, UK
  3. Centre for Innovative Drug Development and Therapeutic Trials for Africa (CDT-Africa), College of Health Sciences, Addis Ababa University, Addis Ababa, Ethiopia
  4. Delft University of Technology, Delft, The Netherlands
  5. King's College London, London, UK
  6. University of Cape Town, Rondebosch, South Africa
  7. University of East Anglia Faculty of Medicine and Health Sciences, Norwich, UK
  8. Research Support Centre, College of Health Sciences, University of Zimbabwe, Harare, Zimbabwe
  9. Pravara Institute of Medical Sciences, Loni, Maharashtra, India
  10. Health Service & Population Research Department, King's College London, London, UK

Correspondence to Dr Nadine Seward; nadine.seward{at}kcl.ac.uk

Abstract

Implementation research is a multidisciplinary field that addresses the complex phenomenon of how context influences our ability to deliver evidence-informed healthcare. There is increasing realisation of the importance of applying robust implementation research to scale up life-saving interventions that meet health-related sustainable development goals. However, the lack of high-quality implementation research is impeding our ability to meet these targets globally. Within implementation research, theory refers to the proposed hypothesis and/or explanation of how an intervention is expected to interact with the local context and actors to bring about change. Although there is increasing interest in applying theory to understand how and why implementation programmes work in real-world settings, global health actors still tend to favour impact evaluations conducted in controlled environments. This may, in part, be due to the relative novelty and methodological complexity of implementation research and the need to draw on divergent disciplines, including epidemiology, implementation science and the social sciences. Because of this, implementation research is faced with a particular set of challenges about how to reconcile different ways of thinking and constructing knowledge about healthcare interventions. To help resolve some of the ambiguity surrounding how divergent theoretical approaches and methods contribute to implementation research, we draw on our multidisciplinary expertise in the field, particularly in global health. We offer an overview of the different theoretical approaches and describe how they are applied to continuously select, monitor and evaluate implementation strategies throughout the different phases of implementation research. In doing so, we offer a relatively brief, user-focused guide to help global health actors implement and report on evaluation of evidence-based and scalable interventions, programmes and practices.

  • health policies and all other topics
  • health systems
  • health systems evaluation
  • public health



This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Summary box

  • Despite the evidence that highlights the need for implementation research to achieve the United Nations' Sustainable Development Goals, global health researchers often do not incorporate core implementation components in their work systematically.

  • This guide aims to address the lack of guidance describing how to apply a range of theoretical approaches and methods that address specific questions in the context of implementation research for global health.

  • In doing so, we aim to offer a relatively brief, user-focused guide to help multidisciplinary global health groups design, implement and report on evaluation of evidence-based and scalable interventions, programmes and practices.

  • Improving the quality of implementation research in global health cannot be achieved without decolonising the very structures global health is based on. This includes, but is not limited to, ensuring research agendas are no longer set by donors from the global North. Instead, research should be driven by people from the South—the holders of knowledge of the local needs and priorities and the context that shapes their health systems.

Introduction

In poorly resourced settings within low-income and middle-income countries (LMICs), and within some high-income countries, access to high-quality evidence-informed healthcare is severely limited.1 2 In these settings, a range of contextual and behavioural barriers and enablers at the macrolevel, mesolevel and microlevel influence our ability to deliver this care, including sociocultural, socioeconomic, geographical, epidemiological, psychological, ethical, legal and political determinants.3 Overcoming these barriers and leveraging the enablers to effectively implement evidence-informed practice requires a mixed-methods, theoretically informed approach involving stakeholders from all levels of the health system, including service users. This is inherently different from the approach typically applied to clinical trials, which test the effectiveness of a novel drug or therapy.4

Implementation research is widely recognised as ‘the scientific inquiry into questions concerning implementation—the act of carrying an intervention into effect, which in health research can be policies, programmes or individual practices (collectively called interventions)’.5 Although the terms ‘implementation research’ and ‘implementation science’ are used interchangeably in the literature, we use implementation research as an umbrella term, under which fall various theoretical approaches and methods. As an example, we situate implementation science (eg, determinant frameworks, evaluation frameworks and implementation theories) as one approach within implementation research that studies the methods by which we bring evidence-informed care to scale.

In order to ensure healthcare interventions improve health outcomes, a central concern of implementation research is to develop, test and refine the theory of how delivery of the intervention can be optimised within complex social settings. Within implementation research, theory can refer to the proposed hypothesis and/or explanation of how an intervention is expected to interact with the local context and actors to bring about change.6–8 Implementation research typically requires multidisciplinarity that draws on a range of theoretical approaches. These theoretical approaches arise from different knowledge paradigms that view the world differently, such as positivism (reality is a set of observable events), critical realism (reality exists separately from social actors) and relativism (reality is subject to different interpretations of how actors engage with one another and the context).9 10 Because of this, implementation research is faced with a particular set of challenges about how to reconcile different ways of thinking and constructing knowledge about healthcare interventions. Box 1 describes how the theoretical approaches offered by different knowledge paradigms contribute to implementation research.

Box 1

The contribution of theoretical approaches offered by the different knowledge paradigms to implementation research

Positivism: Views the world as observable events that can be measured. Positivists use deductive reasoning, primarily applying quantitative methods to test hypotheses about a proposed intervention that are grounded in previous research or proposed theories. Positivist approaches to implementation research address questions such as ‘what is the effectiveness of task-shifting in increasing access to psychological therapies?’ and ‘how does stigma influence the effectiveness of task-shifting in increasing coverage of psychological therapies?’.

Relativism: Views reality as subject to different interpretations, generating multiple realities that are constructed and shifting over time through the actions and interactions of different actors.9 10 Relativists use an inductive approach that is not driven by a hypothesis to generate a theory about a phenomenon or to test a pre-existing theory. A relativist approach to implementation research addresses questions such as ‘How do social processes influence a patient’s understanding and experiences of task-shifting to improve access to psychological therapies for depression?’.

Critical realism: Views reality as existing independently from social actors, stratified by social structures (eg, healthcare systems, political institutions, economies) and processes (eg, political, legal, ethical and bureaucratic policies). Critical realists use both deductive (theory testing) and inductive (theory building) approaches. By framing the social world as socially structured actions and mechanisms, critical realists produce knowledge on the range of mechanisms that mediate the relationship between cause and effect within different contexts, thereby addressing questions about what works for whom and under what circumstances.9

These challenges are not unique to implementation research. Given that implementation research is about strengthening health systems to ensure access to and delivery of high-quality affordable care, it also shares close ties with Health System and Policy Research (HSPR).5 Indeed, HSPR has been defined as ‘a discipline that seeks to understand and improve how societies organise themselves in achieving collective health goals, and how different actors interact in the policy and implementation processes to contribute to policy outcome’.9 Due to the increased awareness of its importance for health system strengthening, implementation research is now a rapidly expanding discipline that tests, refines and adapts different strategies to address barriers and build on enablers to bring high-quality evidence-informed practice to scale.5

Whereas public health in LMICs addresses issues relevant to a particular community or country,11 the focus of global health is the quest for equity, within high-income countries as well as within LMICs.12 As such, global health is defined by the problems of delivering high-quality evidence-informed care in practice, especially to the most vulnerable. Like HSPR, implementation research is a highly interdisciplinary and multidisciplinary field that addresses delivery problems in global health.12

Current state of implementation research in global health

Despite the evidence that highlights the need for implementation research to achieve the United Nations Sustainable Development Goals,13 a review addressing how these approaches and methods are being applied in LMICs suggests that researchers often do not incorporate core implementation components in their work.14 As an example, the review found that only a very small number of articles made use of implementation models or theories. The review also found that of the studies that reported their research as implementation research, only 52% (n=415) described contextual determinants and fewer than 5% addressed objectives focused on scaling up (n=32) or sustainability (n=25). Box 2 describes characteristics of high-quality implementation research.

Box 2

Key principles and concepts to support high-quality implementation research in low-income and middle-income countries (LMICs)

Capacity building: The need to build capacity to conduct high-quality implementation research in global health is greatest in resource-poor contexts within LMICs. Despite efforts to increase the numbers of researchers who are able to carry out high-quality research, numbers remain low.74

Context: Implementing evidence-informed practices needs to account for the context—any feature of the circumstances in which an intervention is conceived, developed, implemented and evaluated.75 With implementation research in LMICs, addressing ‘contextual equipoise’ is particularly relevant as it helps to ensure that implementation efforts account for the needs and priorities of the local population, in addition to preventing the inappropriate use of randomised controlled trials that deny participants in the control arm access to treatments that are known to be effective within the local context.76

Evidence-informed practice: Whereas evidence-based care is considered the gold standard for effective healthcare delivery, the application of evidence into practice is referred to as evidence-informed practice.77 This distinction is important as the application of evidence into practice encounters multiple difficulties, including unstable settings with rapidly changing contexts and unintended consequences. Many healthcare practitioners therefore feel that practice should be informed by, as opposed to based on, evidence.

Embedding research activities into existing programmes and health systems: Embedded research is carried out as an integrated and systematic part of decision making and implementation that involves the collaboration between policy-makers, implementers and communities.33 Embedding research improves ownership and, therefore, the application of the research findings.

Evaluation in real-world settings: Most implementation research in global health is conducted in resource-constrained settings. In order to bring evidence-based care to scale, it is essential that implementation research is embedded in the local health system or community, allowing research to be conducted under real-world conditions, with the types of resources, incentives and operational support available in routine situations.14

Mixed-methods approach: Both quantitative and qualitative methods are required to understand how and why interventions work in real-world settings. Findings are then triangulated to conceptualise and confirm how the implementation of evidence-informed practice led to the measured impact.14 78

Multistakeholder involvement and engagement: Collaboration and partnerships among multiple stakeholders (such as academics, implementers, users, advocates, policy-makers and donors) across various influence domains (research, programme, policy and funding) are important for any implementation research enterprise to achieve large-scale impact.14 Ensuring involvement of stakeholders who can potentially influence implementation efforts can also help to ensure the intervention meets the needs and priorities of the population it is intended to serve.

Scale-up: Many effective treatments are never brought to scale, a problem often referred to as the ‘delivery gap’.79 Applying robust implementation research can help support scale-up.80

Systems-level approach: A systems-level approach to the design and evaluation of interventions views a complex intervention as a system in itself, interacting with other building blocks of the underlying health system in which the intervention embeds itself, setting off reactions that may well be unexpected or unpredictable.81

Sustainability: The extent to which a newly implemented treatment is maintained or institutionalised within a service setting. With implementation research in LMICs, sustainability is essential as it is unethical to implement evidence-informed care that has effectively improved health outcomes in other settings, only to withdraw this treatment if it is not sustainable.82

Theory-driven research: A theory can be defined as a set of analytical principles or statements designed to structure our observation, understanding and explanation of the world.35 Within the field of implementation science, this research uses theory to develop a set of propositions or hypotheses about how implementation phenomena might unfold, which are subsequently tested through the collection of empirical observations. Arguably, all attempts to improve healthcare and its outcomes are driven by theory83; theory-driven research makes such theories explicit and allows them to be tested in practice, so that an evidence base accumulates gradually around implementation phenomena.

Unintended consequences: Implementation research often produces outcomes that were not anticipated, which can be positive or negative.84 It is important to be mindful of, and to explore, whether any unintended or unanticipated consequences occur as a result of implementation efforts. Research projects should be designed to allow for the identification and effective management of unintended consequences.85

Health systems strengthening (HSS): Involves comprehensive changes to policies and regulations, organisational structures, and relationships across the health system building blocks (eg, service delivery, health workforce, health information systems, access to essential medicines, financing and leadership/governance) that motivate changes in behaviour and/or allow more effective use of resources across multiple care platforms.81 86 Implementation research, which applies a multidisciplinary approach to understand which interventions and implementation strategies work for whom, and how, can be usefully applied to HSS by identifying and addressing barriers and opportunities to the delivery of high-quality care and testing potential solutions.5

Reasons for the above weaknesses are complex and stem from more than the novelty of some implementation science developments or the multidisciplinary nature of the work. Much of global health research is driven by actors in the North (ie, high-income country institutions), who decide not only what needs to be investigated and what funding is available, but also the methods and approaches used to investigate the topic of interest.15 Donors from the North also tend to favour short-term research programmes with measurable impact, whereas implementation research and health systems strengthening require longer term investments with more uncertain outcomes, and genuine engagement with users, stakeholders and adopters of research outcomes, which is resource-intensive and time-consuming.15 Decolonising global health, which ‘involves removing all forms of supremacy within all spaces of global health practice, within countries, between countries and at a global level’, is key to improving the quality and relevance of implementation research.16 Although not formally defined, decolonising global health can be viewed as ‘a movement that fights against ingrained systems of dominance and power in the work to improve the health of populations’.17 In part this can be achieved by ensuring actors in the South (ie, LMIC institutions) are the ones who take ownership of and drive implementation research within their respective contexts.16

Addressing the need for high-quality implementation research

There have been useful guides and articles published on implementation research that aim to ensure rigour. The WHO guide to implementation research, published by the Alliance for Health Policy and Systems Research, has the overall aim of improving capacity, particularly within LMICs.5 The guide provides a broad overview of implementation research, including why it is important, appropriate methods, and an explanation of relevant stakeholders.5 The Medical Research Council (MRC) of the UK has published guidance on the design and evaluation of complex interventions that has proven to be very influential.18 More recently, the MRC has released guidance on how to adapt an evidence-informed intervention to a new context.19 Although these guidelines review in detail the essential components needed to design and evaluate complex interventions, they offer little emphasis or direction on how different theoretical approaches and methods can be applied in practice throughout the different phases of implementation research to address specific research objectives.

This guide aims to address the gaps in the literature in order to provide an overview of the different theoretical approaches and methods to implementation research that can be applied to address specific research objectives, throughout the implementation process. Our guidance, including the cited literature, is based on our expertise in the following multicountry implementation research programmes: Programme for Improving Mental Health care (PRIME),20–23 a consortium of research institutions and ministries of health in five LMICs that implemented and expanded coverage of treatment for mental health conditions in primary care and community settings; the Emerging Mental Health Systems in LMICs (EMERALD) programme24 25; and a multicountry heAlth Systems StrEngThening programme guided by implementation research and implementation science in sub-Saharan Africa (ASSET).26 27 We also have experience with participatory research including participatory learning and action (PLA) with women’s groups to improve maternal and newborn health outcomes28 29 and human-centred design to improve perinatal mental health outcomes.30 We aim to use our experience to offer a relatively brief, user-focused guide to help global health actors recognise the important contributions arising from the different theoretical approaches and other methods to implement and report on evaluation of evidence-based and scalable interventions, programmes and practices.

The rest of the paper is organised as follows: initially we offer an overview of the role of theory and describe how this is applied to the different phases of implementation research. We then provide an outline of how the different theoretical approaches and associated methods we present can be applied throughout the implementation research process. This is followed by a discussion of how other methods, such as engagement with stakeholders, need to be embedded within implementation research. We provide case examples for illustration of the above aspects of implementation research that are based on research we have conducted within PRIME, EMERALD and ASSET as well as other research programmes we have been involved in.

Overview of implementation research

The role of theory applied to implementation research

Within implementation research, the question(s) being investigated will determine the theoretical approach(es) taken and how such investigations are carried out. This applies from the preimplementation phase, which focuses on stakeholder engagement and participatory methods to understand the context where an intervention will be introduced, to the codesign of interventions and implementation strategies, through to approaches adopted to evaluate implementation. This approach can help to ensure both sustainability and transferability.

Implementation research relies heavily on social science theories driven by the relativist paradigm of knowledge. Social science theories (eg, grand theories, such as Marxism)31 provide abstract conceptualisations of the social world that explain the causal relationships between a phenomenon and an outcome.32 Social science theories can also be used to guide not only the study design, but also the analysis and understanding of the findings. Box 3 describes different theoretical approaches and other methods relevant to implementation research. Box 4 describes methods and tools that complement the theoretical approaches used within implementation research.

Box 3

Theoretical approaches to implementation research

Implementation science

Implementation determinant frameworks: Compilations of contextual barriers/enablers that are known to influence the ability to effectively implement evidence-informed practice. Researchers use these frameworks to help identify determinants relevant to their implementation problem and the context within which they work.35

Implementation theories: A set of analytical principles or statements designed to structure our observation, understanding and explanation of the world. Implementation theories can be used to identify enablers/barriers to implementation as well as the mechanisms by which these operate.35

Implementation process models: Describe the process of translating research into practice through different phases of research.35

Social science theories: Drawing from the relativist knowledge paradigm, social science theories provide abstract conceptualisation of the social world that explain causal relationships between a phenomenon and an outcome.32

Realist evaluation: A way of connecting high-level social theory with empirically observable patterns. ‘Middle-range theories’ derived from this approach are useful in addressing complexity, for example by accounting for how context influences the underlying mechanisms through which implementation strategies achieve their outcomes.84

Programme theory: Describes how a specific intervention is expected to lead to its effects and under what conditions.87

Participatory methods: Defined as the process of producing new knowledge by ‘systematic inquiry, with the collaboration of those affected by the issue being studied, for the purposes of education and taking action or effecting social change’.53

Box 4

Methodological approaches and tools for implementation research

Implementation science

Implementation strategies: Methods or techniques used to enhance the adoption, implementation and sustainability of a clinical programme or practice.4 Implementation strategies are selected to overcome identified contextual barriers. Other terminologies to describe these methods include health system strengthening interventions and quality improvement strategies.

Implementation evaluation frameworks: Specify implementation outcomes that can be evaluated to determine implementation effectiveness.35

Implementation outcomes: Defined as ‘the effects of deliberate and purposive actions to implement new treatments, practices and services’, and are distinct from service and patient outcomes.40 Implementation research uses implementation outcomes (eg, acceptability, fidelity, appropriateness) to assess how well implementation has occurred or to provide insights about how implementation contributes to health status or other important health outcomes. Implementation strategies should be selected to target and improve specific implementation outcomes.

Taxonomies

The Expert Recommendations for Implementing Change taxonomy: A taxonomy of implementation strategies that allows researchers to apply a common language when describing how evidence-informed interventions are being implemented. Implementation strategies are selected to overcome identified contextual and behavioural barriers.37

Effective Practice and Organisation of Care taxonomy: A taxonomy of health system strengthening interventions that is similar to the taxonomies of implementation and quality improvement strategies.39

Implementation-effectiveness hybrid trials

Trials that are designed to evaluate both implementation and effectiveness outcomes, in addition to the influence of context on the effectiveness of the intervention.88

Literature reviews

A review of the literature is required to identify and understand the mechanisms (ie, theory) behind how contextual barriers and enablers influence the ability to effectively deliver evidence-informed practice.76

Reporting guidelines for implementation research

Template for Intervention Description and Replication (TIDieR) checklist: To help improve the quality of descriptions of interventions and therefore their replicability, a group of experts used the Delphi process to develop the TIDieR checklist.89 90 It is recommended that researchers use this checklist to help improve the quality and reporting of implementation research for global health.

Getting messier with TIDieR: Another checklist has been developed to address gaps in the TIDieR checklist for research conducted outside of trials, such as implementation research. The additional items in this framework include factors such as how contextual factors influenced intervention delivery.

Phases of implementation research

Ideally, implementation research involves four phases: (1) preimplementation; (2) piloting; (3) implementation and evaluation; and (4) postimplementation/dissemination. The core components and methods of implementation research are applied repeatedly throughout these phases.

The preimplementation phase is a critical phase that identifies the overall aim of the study and the main theoretical approach and subsequent methodology.26 It is also useful at this stage to identify specific objectives that may indicate an additional theory-based approach is required. Key to this phase of research is engaging with stakeholders who are a part of the public health system to ensure that the research objectives address the needs and priorities of the local population. This helps to embed the research programme within the existing health system from the outset.5 33 This can help to ensure local ownership of the research and, therefore, enhance the longer-term sustainability of implementation efforts.5 33 This phase of research also requires involving stakeholders to conduct a careful assessment of the local context to understand and address contextual and behavioural barriers and/or enablers to implement evidence-informed practice.26

At the end of the preimplementation phase, a diverse group of stakeholders meet to review the findings, select an initial set of implementation strategies, and develop an initial programme theory (based on the theoretical approach taken) that details the causal processes of how the implementation strategies are expected to achieve the desired outcomes. This process can help to ensure appropriate methods are selected and relevant information is collected throughout the implementation and evaluation phase of research. Finally, conducting an evaluation of costs associated with the implementation and scale-up of the intervention at this stage is important, among other things to secure buy-in from policy-makers.34

The piloting phase implements and evaluates the set of implementation strategies selected to deliver the evidence-informed practice to ensure it is behaving as intended in a limited number of pilot sites. Specifically, the theoretical approaches adopted will be used to drive the appropriate methods to monitor the effectiveness of implementation strategies in overcoming the contextual determinants they were selected to address. Based on these findings, participatory approaches, such as involving stakeholders in a Theory of Change (ToC) workshop and/or focus group discussions, can be used to adapt the implementation strategies and associated programme theory.

After adjusting the initial programme theory, the implementation and evaluation phase begins an iterative process of evaluating and/or understanding the effectiveness of the set of selected implementation strategies and associated evidence-informed care, on relevant implementation and clinical outcomes. It is paramount at this stage to document and monitor the influence of context on the effectiveness of the implementation strategies on implementation outcomes. All of this is done ensuring the continual engagement of stakeholders and embedding the research into existing health systems, as initiated in the preimplementation phase of the research. For example, conducting regular feedback meetings with health service managers and practitioners to provide updates on implementation strategies and hear their comments on this process is vital.

Lastly, the postimplementation/dissemination phase involves consolidating the engagement activities with stakeholders and users that have been ongoing throughout the earlier stages and together implementing a knowledge exchange, engagement and dissemination plan.

Although we have recommended that implementation research involves four phases, in practice researchers may find themselves in situations where not all of these stages are feasible to design and deliver prospectively; hence some retrospective application of theories and methods (described below) may be necessary.

Theoretical and methodological approaches to implementation research

Implementation science

Implementation science uses a combination of specialist theories, models and frameworks to evaluate the effectiveness of implementation strategies, and of other methods for implementing evidence-informed care, as measured by implementation outcomes.35 Implementation science also seeks to understand how context influences the effectiveness of implementation strategies on implementation outcomes and how these dynamics change throughout the implementation process. In what follows, we describe three key theoretical approaches within implementation science.

Determinant frameworks

Implementation science determinant frameworks offer a theoretical approach to implementation research developed to identify and account for specific contextual barriers and enablers that influence the implementation of evidence-informed practice.35 Put simply, these frameworks are designed to answer the question: what determines the success or failure of an implementation effort? Many of these frameworks were designed by synthesising results from empirical studies of barriers and enablers to implementation success, while others were developed using existing frameworks and theories.35 Although an investigator may take a positivist approach to use the frameworks to identify determinants, the frameworks can also be operationalised in a way more aligned with critical realism. As an example, these frameworks can be used to theorise how the identified barriers and enablers bring about change by addressing interactions across multiple domains at microlevel, mesolevel and macrolevel.27

Applying determinant frameworks to data-collection tools (eg, focus group discussions, interviews) in the preimplementation phase of research provides an evidence-based methodology to identify and address contextual and behavioural barriers and enablers that might otherwise have been missed. Identifying determinants in this phase of research can inform the selection of implementation strategies to support delivery of the evidence-informed practice.

During the implementation and evaluation phase, determinant frameworks are also used to guide the development of data collection tools that are used to monitor and understand the influence of context on the implementation strategies in delivering the evidence-informed practice.36 Determinant frameworks are particularly useful in explaining variation in implementation outcomes across studies.36

Determinant frameworks typically address five main categories of factors that can influence implementation efforts: characteristics of the evidence-informed intervention that is being implemented (eg, adaptability and complexity); the external setting (eg, sociocultural, epidemiological and socioeconomic determinants); the internal setting (eg, healthcare facility); characteristics of the users and providers; and processes of implementation. Table 1 provides examples of how different determinant frameworks can be used to identify contextual determinants, relevant to an LMIC setting, that influence the delivery of evidence-based care. The frameworks included in table 1 address characteristics of determinants that are particularly relevant in LMICs, including the external context (eg, lack of resources), characteristics of the healthcare facilities (eg, ability to provide people-centred care and lack of supplies to effectively implement the interventions), and characteristics of the users of the healthcare facilities (eg, lack of knowledge about where or when to seek care, lack of empowerment to seek care) as well as the providers (eg, lack of training in how to provide people-centred care).

Table 1

Examples of how implementation science determinant frameworks can be applied to identify contextual determinants that influence the implementation of evidence-based care
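
As a purely illustrative sketch, the snippet below shows one way a research team might organise determinants identified during the preimplementation phase by the five broad domains described above. The domain labels and example entries echo those in the text; the structure is ours and is not tied to any specific published framework.

```python
# Illustrative only: organising identified determinants by broad domain.
# Domain names and entries are examples echoing the text, not a formal framework.
determinants_by_domain = {
    "intervention characteristics": [
        "complexity of the evidence-informed practice",
        "adaptability to local workflows",
    ],
    "external setting": [
        "lack of resources",
        "sociocultural norms around help-seeking",
    ],
    "internal setting": [
        "lack of supplies to deliver the intervention",
        "limited capacity for people-centred care",
    ],
    "characteristics of users and providers": [
        "lack of knowledge about where or when to seek care",
        "providers not trained in people-centred care",
    ],
    "implementation process": [
        "no local champion identified",
        "weak routine monitoring",
    ],
}

# Print a simple summary that could feed into a stakeholder workshop
for domain, items in determinants_by_domain.items():
    print(domain.upper())
    for item in items:
        print(" -", item)
```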

Implementation strategies

We have noted some confusion in the literature with the terminology used for implementation strategies, where in some instances they are referred to as ‘components of a complex intervention’. In implementation science terms, implementation strategies are separate from any health-focused intervention. Essentially, implementation strategies are a tool used within the discipline: they are the implementation approaches that deliver and improve the uptake of an evidence-informed practice.35

Implementation strategies are selected to overcome the contextual barriers to delivering evidence-informed practice that were identified in the preimplementation phase of research, using tools such as determinant frameworks and the programme theory.4 As an example, a common implementation strategy applied in implementation research for global health is task sharing, using community health workers to deliver the evidence-informed care. When selecting different implementation strategies, it is useful for researchers, jointly with study stakeholders, to theorise how they expect the strategies to play out once implemented within different contexts. This theorising of context and its relationship with the intervention will be critical for informing how implementation strategies in the delivery of the evidence-informed practice are subsequently evaluated and monitored over time.

There is inconsistent labelling of implementation strategies used for implementation research.37 This has resulted in difficulty in synthesising results across studies to understand the effectiveness of specific methods in a given context.38 However, methods are available that can help with selecting and labelling relevant implementation strategies, including a tool developed by The Expert Recommendations for Implementing Change (ERIC) study.37 The ERIC tool offers a compilation of implementation strategies that are known to be effective in addressing specific contextual barriers and enablers. Further, the Effective Practice and Organisation of Care taxonomy is a compilation of health system strengthening interventions.37 39 Both taxonomies share the same objective of using a common language to label the methods (ie, implementation strategies or health system strengthening interventions) used to deliver the evidence-informed practice that will help with the generalisation of findings from one research programme to another. Table 2 describes implementation strategies commonly used for implementation research in global health.

Table 2

Examples of implementation strategies (health system strengthening interventions) for implementation research in global health
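
For illustration only, the sketch below shows how a team might record the mapping from identified barriers to candidate implementation strategies labelled with a common vocabulary. The barrier descriptions and strategy labels are hypothetical examples of the kind of entries found in such taxonomies; in practice they would be drawn directly from the ERIC or EPOC compilations and agreed with stakeholders.

```python
# Illustrative sketch: linking barriers from the preimplementation phase to
# candidate implementation strategies. Labels below are hypothetical examples;
# real studies would select and label strategies from the ERIC or EPOC taxonomies.
barrier_to_strategies = {
    "Shortage of specialist mental health workers": [
        "Task sharing with community health workers",
        "Conduct ongoing training and supervision",
    ],
    "Low awareness of available services among service users": [
        "Involve patients, consumers and family members",
        "Community awareness-raising activities",
    ],
    "Weak routine monitoring of service delivery": [
        "Audit and provide feedback",
        "Develop and use quality-monitoring tools",
    ],
}

# A simple readout that could be reviewed in a stakeholder meeting
for barrier, strategies in barrier_to_strategies.items():
    print(f"Barrier: {barrier}")
    for strategy in strategies:
        print(f"  Candidate strategy: {strategy}")
```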

Evaluation frameworks

Evaluating the effectiveness of a novel treatment requires the selection of appropriate clinical outcomes. Similarly, implementation strategies need to be evaluated to assess their effectiveness, for which appropriate outcomes need to be selected. To achieve this, implementation outcomes are selected that are supported by a social science theory, or implementation science theories or frameworks.35 Evaluation frameworks are a tool that can be applied to assist researchers in selecting appropriate implementation outcomes to evaluate for a particular set of implementation strategies.35

Initially, determinant frameworks are used to identify barriers and enablers to implementation, followed by selecting implementation strategies to overcome the identified barriers. Implementation outcomes should then be selected that capture the effectiveness of the implementation strategies in addressing the contextual or behavioural determinants. As an example, if there is a lack of healthcare workers to deliver a previously evaluated care intervention (determinant), then approaches such as task shifting with community healthcare workers are used to address this issue (implementation strategies—otherwise known as the ‘intervention’). A relevant implementation outcome would therefore be coverage (ie, the proportion of the population receiving care prior to implementation compared with after implementation). Proctor et al40 have published a taxonomy of implementation outcomes that offers a list of conceptually distinct outcomes for evaluation—including acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration and sustainability. There are other useful evaluation frameworks available to help select implementation outcomes, including the reach, effectiveness, adoption, implementation and maintenance (RE-AIM) framework.41
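
For illustration, the coverage outcome in the example above can be written as a simple proportion measured before and after implementation. The notation below is ours and is not taken from any specific framework.

```latex
% Illustrative notation only: coverage as an implementation outcome measured
% before and after implementation in a defined target population.
\[
\text{Coverage}_{t} \;=\;
\frac{\text{people receiving the evidence-informed care at time } t}
     {\text{people in the target population needing that care at time } t},
\qquad
\Delta\text{Coverage} \;=\; \text{Coverage}_{\text{post}} - \text{Coverage}_{\text{pre}}
\]
```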

It is important to evaluate a combination of relevant implementation outcomes at multiple time points throughout the implementation process.40 It is also imperative to monitor and theorise about the influence of context on the effectiveness of the implementation strategies on implementation outcomes. Determinant frameworks can be used to guide data collection tools to assess how context influences specific implementation strategies and associated implementation outcomes. Table 3 provides examples of how implementation outcomes have been applied to implementation research in LMICs.

Table 3

Examples of how implementation outcomes have been applied to implementation research to evaluate specific implementation strategies in global health

Implementation theories

Whereas social science theories explain the causal mechanisms between certain phenomena and an outcome,32 implementation theories are a theoretical approach typically developed or adapted by researchers to specifically understand or explain certain aspects of implementation.35 Researchers select an implementation theory to analyse the mechanisms of the implementation process, which can help to explain why implementation efforts are successful (or not). How a researcher views the social world shapes how the implementation theory is applied. Typically, implementation theories emerge from different knowledge paradigms, some of which may draw on wider social theories of behaviour or incorporate wider social forces.

In the preimplementation phase, implementation theories can be applied not only to identify determinants of implementation, but also to understand the mechanisms by which the implementation strategies will deliver the evidence-informed practice. Moreover, throughout the implementation and evaluation phase, a mixed-methods design guided by an implementation theory can be used to understand barriers/enablers to implementation as well as the mechanisms by which the implementation strategies work, for whom and how. An example of how an implementation science theory has been applied in practice can be found in online supplemental file A, table 1. A broad description of several implementation theories is offered by Nilsen.35

Supplemental material

Realist evaluations

Realist evaluations are theory-based evaluations based on realist philosophy42 that are gaining popularity in global health.43 44 Realist philosophy was designed to sit between the positivist and relativist approaches. Realism assumes that nothing works for everyone everywhere and that the effects of interventions are largely determined by context. Therefore, a realist evaluation can help to understand and evaluate the complexity surrounding implementation research by conceptualising what works, for whom and how.42 Evaluating this complexity is useful in developing an understanding of how an intervention can be adapted to a new context and scaled up.45

Initially, a programme theory is developed (based on previous research and knowledge) that explains how the intervention is expected to produce the intended outcomes and in what contexts this can be achieved.42 The programme theory is then revisited and modified throughout the evaluation to arrive at a final theory. This approach is otherwise known as the context-mechanism-outcome (CMO) configuration. Here, context refers to the conditions in which an intervention is introduced (sociocultural, political, socioeconomic, ethical, epidemiological), which can occur at the microlevel, mesolevel and macrolevel.46 A mechanism refers to how social actors reason about and react to the available resources (ie, the intervention) to bring about change in a specific context.47 Once mechanisms are activated in a specific context, they can be identified and measured through their expected or unexpected outcomes.48 When applied to implementation research, we envisage context triggering the mechanisms due to the introduction of the implementation strategy(ies) embedded within a broader programme. An example of applying realist theory to implementation research is a study that evaluated how different contexts influenced the mechanisms responsible for divergent outcomes following the implementation of a user fee exemption policy for caesarean section at two hospitals in Benin (online supplemental file A, table 2).49
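
As a minimal, hypothetical sketch (the field names and example content are ours and are not drawn from the Benin study), CMO configurations can be recorded as simple structured statements and refined as evidence accrues during the evaluation:

```python
# Illustrative sketch of recording and refining Context-Mechanism-Outcome (CMO)
# configurations across evaluation cycles. All content is hypothetical.
from dataclasses import dataclass, field

@dataclass
class CMOConfiguration:
    context: str      # conditions (micro/meso/macro) in which the strategy is introduced
    mechanism: str    # how actors reason about and respond to the resources offered
    outcome: str      # expected or unexpected outcome once the mechanism is triggered
    evidence: list = field(default_factory=list)  # data supporting or refuting the configuration

# Initial programme theory expressed as candidate CMO configurations
programme_theory = [
    CMOConfiguration(
        context="Rural clinic with few mental health specialists",
        mechanism="Task-shared workers feel trusted and equipped after supervision",
        outcome="Increased coverage of psychological therapies",
    ),
]

# During the evaluation, configurations are revisited as qualitative and
# quantitative data accrue, and refined towards a final theory
programme_theory[0].evidence.append("Focus group discussions with community health workers")
for cmo in programme_theory:
    print(f"IF {cmo.context} THEN {cmo.mechanism} LEADING TO {cmo.outcome}")
```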

Participatory approaches within implementation research

Participatory research is underpinned by a particular view of the social world in which interventions/implementation strategies are socially constructed through interactions. Indeed, participatory research draws on the paradigms of critical theory and constructivism, common to the social sciences.50 Participatory methods, which involve stakeholders including research participants (ie, patients and their carers), are essential to ensure that the voices of ‘experts by experience’ are heard, to gain local buy-in, and to ensure acceptability and sustainability for the longer term.51 52 Participatory research has been defined as the process of producing new knowledge by ‘systematic inquiry, with the collaboration of those affected by the issue being studied, for the purposes of education and taking action or effecting social change’.53

Although there are different applications of participatory research, they all share the common objective of improving social and economic conditions to effect change and to reduce the distrust of the people being studied.53 Participatory research methods are particularly relevant to disadvantaged communities and, therefore, some communities within resource-poor settings within LMICs, which are often excluded from the planning and implementation of health interventions. The inclusion of a variety of stakeholders integral to the delivery and uptake of interventions supports the development of an intervention’s theoretical foundation unencumbered by a positivist framework.54 These methods also support self-empowerment by removing barriers and promoting environments within which communities can increase their capacity to identify and solve their own problems.55

Participatory action research

Participatory action research (PAR) is an example of such a methodology and represents a broad family of research approaches that emphasise social change and transformation. It is a self-reflective process, involving both researchers and participants, which undertakes action based on the local context and aims to empower participants to improve health and reduce health inequities.50 PAR is a cyclical process where action is achieved through participants identifying a problem, collecting and analysing relevant information, developing an action and reflecting on that action. The process of PAR is expected to be empowering and to lead to people having increased control over their lives and communities.50 Similar to PAR is PLA, a form of action research that is grounded in the participation of people in a local community and is facilitated by local community members rather than an external researcher. PLA enables and empowers people through problem-solving via a process of sharing, learning, action and reflection.56

Human-centred design

Human-centred design is another participatory research approach, which allows for the meaningful engagement of key stakeholders, including the intervention’s target population, in all implementation research phases. It comprises five stages of intervention development and evaluation: (1) Empathise—identification of stakeholder perceptions, needs, goals and priorities; (2) Define—agreement of a priority challenge(s) to be addressed; (3) Ideate—development of potential interventions; (4) Prototype—refinement of interventions; and (5) Test—evaluation and further refinement. Although initially used within the private sector, its potential utility within the field of global health has been identified through recent research.57 The use of prototypes to refine an intervention and implementation process prior to pilot and trial evaluation is a key feature, as it allows for initial ‘bottom-up’ identification of potential barriers to success and unintended consequences prior to large-scale research investment.

Participatory ToC to develop a programme theory

Participatory ToC methodology is a form of participatory research that involves key stakeholders and aims to improve the understanding of how and why a programme works through the development of a programme theory.58 Programme theories are increasingly being used within implementation research for global health to describe how an intervention intends to bring about change and the relationships between inputs, outputs and outcomes, unintended consequences and underlying assumptions.59 The expectations of how a programme or intervention might work, as articulated in a programme theory, can subsequently be evaluated through a study design that includes process and outcome measures (eg, a process evaluation or hybrid trial).

The ToC process can support the development of shared goals among stakeholders and promote accountability. The initial programme theory can be strengthened by incorporating mid-range theories, such as those developed through realist evaluation, to help explain causal mechanisms that are particularly relevant to the intervention.60 Strengthening the programme theory can also be achieved by incorporating key implementation outcomes and contextual determinants selected from different implementation science frameworks. Indeed, methods are available that can help to merge implementation science approaches into ToC workshops.61 A recommended approach to developing a ToC programme theory or map involves working with stakeholders (including people with lived experience of ill health and their carers) to reach agreement on the intended impact of an intervention, then working backwards to determine the intermediate and short-term outcomes necessary to achieve the desired impact.62 Ideally, ToC programme theories are developed and refined throughout the process of implementation.
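
As a minimal, hypothetical sketch (the content is invented for illustration and is not taken from PRIME or any cited programme), a ToC map worked backwards from the intended impact might be recorded like this:

```python
# Illustrative sketch of a Theory of Change map as a chain of preconditions
# worked backwards from the intended impact. All content is hypothetical.
toc_map = {
    "impact": "Improved mental health outcomes in the district",
    "long_term_outcome": "Increased treatment coverage for priority mental health conditions",
    "intermediate_outcomes": [
        "Primary care workers deliver evidence-informed mental healthcare",
        "Service users attend and remain engaged with care",
    ],
    "short_term_outcomes": [
        "Primary care workers trained and supervised",
        "Referral pathways agreed with district health managers",
    ],
    "assumptions": ["Medication and staff remain available at facilities"],
    "indicators": {
        "treatment coverage": "proportion of people with the condition who receive care",
    },
}

# Working backwards: start from the impact and ask what must be in place beforehand
for level in ["impact", "long_term_outcome", "intermediate_outcomes", "short_term_outcomes"]:
    print(level, "->", toc_map[level])
```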

Table 4 provides examples of participatory research. As an example, ToC workshops were used to develop a programme theory for PRIME in Ethiopia, India, Nepal, South Africa and Uganda.21 22 This programme theory described the hypothesised causal pathway from entry into each district site to achieving changes in treatment coverage for people living with mental, neurological and substance use disorders in that district.

Table 4

Examples of participatory methods in implementation research

Engagement and knowledge exchange activities

A key outcome of implementation research is to ensure scalable, sustainable change from the original research. Ensuring implementation research is participatory, by creating opportunities for engagement with key stakeholders, is key to achieving that change. Several other disciplines have described such processes, including ‘knowledge mobilisation’,63 the use of ‘embedded researchers’,64 ‘co-production’ methods between researchers and practitioners,65 as well as studies examining how societal impact stems from research.63 66 67 Generally these are intended to create opportunities for stakeholders to understand, adopt and sustain outcomes from research, or to create opportunities for ‘productive interactions’.68 These activities can be part of intervention design itself (ie, in codesign or coproduction activities); at other times they occur in parallel to the research and are considered activities that complement the research process by facilitating its adoption and scale-up. Within the context of activities that support the scale-up and adoption of research, there has been extensive research into how to overcome barriers in bringing research evidence closer to policy-making.69 70 One approach developed specifically to address this barrier is the ‘policy lab’.71 Envisaged as a process for engaging evidence and policy-making rather than an isolated activity, these labs serve to build a coalition through the participation of diverse communities, work on the language and presentation of evidence, and engage policy-makers early so they can respond when windows of opportunity for changing policy emerge.71

Further components of implementation research, beyond the scope of this guide, are specialist topic areas such as economic evaluations, literature reviews and implementation-effectiveness hybrid study designs. We have offered an overview of these in online supplemental material B for interested readers.

Supplemental material

Conclusions

This guide is intended to address gaps not covered by existing publications or guidance regarding how best to set up and conduct high-quality implementation research in global health settings. Given that implementation research is a relatively novel and niche field that involves expertise in complex methodologies from specialist disciplines, it is unsurprising that it remains a challenge to conduct and report consistently across studies. To alleviate the ambiguity surrounding the theories, methodologies and tools applied to implementation research, we have described how different knowledge paradigms, with distinctive perspectives on reality, offer contributions that are essential for high-quality implementation research. We have provided guidance through an overview of core methods and approaches offered by the divergent knowledge paradigms and how these can be applied at different phases of research. To help conceptualise how the different approaches to implementation research are applied, figure 1 depicts core components and essential methodologies that we recommend global health researchers apply at the different phases of their research.

Figure 1

Methods and core components recommended for the different phases of implementation research. EPOC, Effective Practice and Organisation of Care; ERIC, Expert Recommendations for Implementing Change; RE-AIM, reach, effectiveness, adoption, implementation and maintenance.

The expertise from specialist disciplines required for implementation research also emphasises the need for extensive capacity building in both high-income countries and LMICs. Further work is also needed to ensure the approaches used for implementation research are adapted, and new ones developed, to suit the different contexts in LMICs and that, importantly, this is driven by actors within those countries (ie, the global South). Funders such as the National Institute for Health Research in England increasingly emphasise the importance of not only capacity building, but also community engagement and involvement as core criteria for funding.72

High-quality evidence-informed implementation research in LMICs will be key to achieving Universal Health Coverage (UHC) with high-quality care. There are multiple reasons that help to explain the lack of high-quality implementation research in LMICs. Importantly, one cannot overlook the issues of accountability, power relations and divergent interests, mainly driven by the global North and strongly informed by colonialism.15 73 As an example, some donors struggle to align their approaches and priorities with LMIC needs and priorities and are more interested in funding programmes with short-term outcomes and known impacts.15 This undermines programmes such as health system strengthening and implementation research that have longer term impacts and rely on local buy-in. Improving our ability to deliver high-quality implementation research will require more effort to decolonise global health. Of particular relevance is the understanding that implementation research must be driven by communities in the global South, and by non-Western researchers, as they hold the knowledge of the local context and of the needs and priorities of the population. To do this successfully, a concurrent emphasis on capacity building within LMICs, as discussed above, is essential, with the leadership of research more commonly initiated and taken by local actors.

We hope this guide can help to build capacity for global health actors in both LMICs and high-income countries. We also hope our guide can help donors understand the requirements of high-quality implementation research, which may need longer-term investments with uncertain outcomes. It is our aspiration that facilitating widespread and shared understanding of the theoretical and methodological approaches needed to conduct effective, robust implementation studies in LMICs can help bring to scale life-saving interventions and achieve UHC goals.

Data availability statement

Not applicable.

Ethics statements

Patient consent for publication

References


Footnotes

  • Handling editor Seye Abimbola

  • Twitter @nadineseward, @sabziehin

  • CH, SH-K, CL, JM and TTS contributed equally.

  • Contributors NSew is the guarantor and responsible for the overall content of the paper. NSew drafted the paper; NSev conceptualised the idea for the paper; NSew and NSev offered insights into implementation science; JM offered details around process evaluations and context; SH-K offered details surrounding policy research; TTS and NSev provided details on participatory research; CH, NSew, TTS, RS, GT, RV, RA and CL provided details of implementation research in global health; JM, SH-K, CH, RS, GT and NSev reviewed several drafts of the manuscript.

  • Funding NSew, CH, Prince and NSev are funded by the National Institute of Health Research (NIHR) Global Health Research Unit on Health System Strengthening in Sub-Saharan Africa, King’s College London (GHRU 16/136/54) using UK aid from the UK Government to support global health research. Sevdalis and Thornicroft’s research is further supported by the NIHR Applied Research Collaboration South London at King’s College Hospital NHS Foundation Trust, and by the ASPIRES research programme in LMICs (Antibiotic use across Surgical Pathways-Investigating, Redesigning and Evaluating Systems), funded by the Economic and Social Research Council. Sevdalis and Thornicroft are members of King’s Improvement Science, which offers cofunding to the NIHR ARC South London and is funded by King’s Health Partners (Guy’s and St Thomas’ NHS Foundation Trust, King’s College Hospital NHS Foundation Trust, King’s College London and South London and Maudsley NHS Foundation Trust) and Guy’s and St Thomas’ Foundation. Hanlon additionally receives funding support from AMARI as part of the DELTAS Africa Initiative (DEL-15-01).

  • Disclaimer The views expressed here are not necessarily those of the NIHR or the Department of Health and Social Care, the NHS, the ESRC, AMARI or the DELTAS Africa Initiative.

  • Competing interests NSev is the director of London Safety and Training Solutions, which offers training in patient safety, implementation solutions and human factors to healthcare organisations and the pharmaceutical industry. The other authors have no conflicts of interest to declare.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.