Using implementation science theories and frameworks in global health
  1. Valéry Ridde1,
  2. Dennis Pérez2,
  3. Emilie Robert3
  1. CEPED, IRD (French Institute for Research on Sustainable Development), Université de Paris, ERL INSERM SAGESUD, Paris, France
  2. Epidemiology Division, Pedro Kouri Tropical Medicine Institute (IPK), Havana, Cuba
  3. ICARES and Centre de recherche SHERPA (Institut Universitaire au regard des communautés ethnoculturelles, CIUSSS du Centre-Ouest-de-l'Île-de-Montréal), Montreal, Quebec, Canada
  Correspondence to Professor Valéry Ridde; valery.ridde@ird.fr

Abstract

In global health, researchers and decision makers, many of whom have a medical, epidemiology or biostatistics background, are increasingly interested in evaluating the implementation of health interventions. Implementation science, particularly for the study of public policies, has existed since at least the 1930s. This science makes compelling use of explicit theories and analytic frameworks that ensure research quality and rigour. Our objective is to inform researchers and decision makers who are not familiar with this research branch about these theories and analytic frameworks. We define four models of causation used in implementation science: intervention theory, frameworks, middle-range theory and grand theory. We then explain how scientists apply these models for three main implementation studies: fidelity assessment, process evaluation and complex evaluation. For each study, we provide concrete examples from research in Cuba and Africa to better understand the implementation of health interventions in the global health context. Global health researchers and decision makers with a quantitative background will not become implementation scientists after reading this article. However, we believe they will be more aware of the need for rigorous implementation evaluations of global health interventions, alongside impact evaluations, and in collaboration with social scientists.

  • health systems evaluation
  • public health
  • intervention study
  • qualitative study

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Summary box

  • Global health researchers and decision makers tend to favour impact evaluations to the detriment of implementation evaluations of health interventions.

  • Research on global health interventions is not yet sufficiently supported by theories and analytic frameworks.

  • Theories and analytic frameworks ensure the quality and rigour of global health intervention implementation evaluations.

  • Impact evaluations should go hand in hand with implementation evaluations to understand implementation processes, causal mechanisms and contextual factors shaping outcomes of global health interventions.

Introduction

Since the publication in 2015 of the Medical Research Council framework for evaluating the processes of complex public health interventions,1 we have witnessed increasing awareness of the importance of such evaluations among global health researchers. Our colleagues with backgrounds in epidemiology, statistics, demography or economics regularly seek our advice and assistance to develop the implementation evaluation component of their intervention research. During our interdisciplinary discussions, we noticed that they rarely made use of theories or analytical frameworks, although these ensure research quality and rigour in implementation science (IS).

In this article, we draw from our experience to open the black box of IS for global health researchers who are unfamiliar with its theories and frameworks. We propose a reflection accessible to as many global health stakeholders as possible on how theories and analytical frameworks are used to understand global health interventions. We define a global health intervention as any action, whether local, national or international, implemented in a context where domestic resources are scarce and power issues resulting from dependence on international aid are present. We believe that this article will be useful to researchers new to this research branch and to field stakeholders who collaborate with researchers or plan such evaluations.

IS is the scientific investigation of factors associated with effective implementation,2 where the roles of context,3 4 actors, ideas, institutions and power are central to the analysis.5–7 For instance, health workers’ ideas about the abolition of healthcare user fees influenced policy implementation in Africa.8–10 Similarly, Burkina Faso’s social context partly explains the heterogeneous outcomes of the childbirth user fee subsidy policy,11 as well as its implementation gap in reducing women’s out-of-pocket expenditures.12 Implementation is the process of putting an intervention (action/project/policy)—either evidence based or theory based—into use in a specific setting.13 Some authors have proposed 14 steps for effective implementation14 and others 23 factors that may influence it.15 The concept of implementation is now considered sufficiently mature16 to be investigated in greater depth in global health. Such studies are all the more important in global health because health interventions implemented in low-income and middle-income settings often originate from, and are funded by, stakeholders from high-income countries.17 18 This asymmetry results in power struggles and relationships among actors, institutions and contexts that inevitably influence the implementation of interventions.

There is now consensus that global health interventions are complex and that it is necessary to adopt methodological approaches that address this complexity. Understanding their implementation, while not easy, has become essential.19 As an example, a study in Mauritania showed no impact of an obstetrical risk insurance scheme,20 whereas a qualitative study revealed that its implementation had not been adapted to health system dysfunctions.21 Even advocates of randomised controlled trials (RCTs) are compelled to use qualitative methods to better understand the causal mechanisms of effective interventions.22

Analysis of the implementation of interventions dates back at least to the 1930s23 and therefore largely predates the current renewed interest. The present enthusiasm has been boosted by the development of implementation research,24 which has a dedicated journal (Implementation Science25) and has prompted the development of methodological guides.26 However, IS differs from implementation research in that the latter focuses on methods for promoting the use of evidence in designing an intervention; it does not specifically aim to analyse its implementation. IS, by contrast, is an umbrella term covering the analysis of the processes of interventions (process evaluation), the analysis of the fidelity of implementation (fidelity assessment) and, especially, the relationships with social actors and context.1 27 IS has an instrumental objective: to understand the factors affecting the implementation of an intervention. It is a research branch that mobilises both qualitative and quantitative data, for example, to measure the fidelity or acceptability of an intervention.28

Global health researchers, research funders and decision makers are increasingly interested in understanding why some interventions fail while others succeed in different contexts. There are at least three corollaries to this growing enthusiasm. First, impact evaluation researchers, who conduct efficacy studies (in controlled environments) rather than effectiveness studies (in real-life settings), tend to quantify and measure rather than uncover the complexity of processes, successes or failures using qualitative or mixed methods. Most of these researchers are also not trained in other methodological approaches, particularly those from the social sciences,29 and do not know the theories and analytical frameworks used in IS.30 Second, IS publications on global health interventions23 31 are still rare. There are few concrete examples and few reflective analyses of the challenges of IS in these specific contexts.31 Third, a recent review of studies from low-income and middle-income countries (LMICs) between 1998 and 2016 showed that ‘only five articles used an explicit or published (…) model or theory’.32 This scarcity inhibits the dissemination of ‘good practices’ and exposes the lack of robust studies.

The objective of this article is to raise awareness among global health researchers and decision makers of how theories and analytical frameworks can be used to make sense of health interventions and their implementation in context, and to conduct rigorous implementation evaluations. This article is not intended for social science experts or evaluators who use ‘theory as method’.33 It targets researchers and decision makers trained in quantitative methods who wish to deepen their understanding of global health interventions in context.

Using theory in implementation science

In global health intervention research, theory-based evaluation is frequently promoted.29 33 In the field of evaluation, theory refers to the intervention theory, that is, the description of how an intervention unfolds and brings about change and of the relationships between inputs, outputs and outcomes.34 35 In social science, theories explain, rather than describe, the causal relationships between a phenomenon and an outcome. Along with conceptual frameworks, they are used to guide the research process, especially for analysis and interpretation.36

A plethora of conceptual frameworks exist to analyse implementation.37 ‘[D]eliberately using conceptual or theoretical frameworks to deepen analysis’ is essential. However, ‘selecting an implementation framework is a challenging task’.38 The novice researcher can quickly become lost in the proliferation of existing approaches. A recent survey revealed the use of about 100 different approaches.39 Although this survey shows the abundance of opportunities, it especially underscores the challenges of selecting the appropriate framework or theory, particularly when there is no clear understanding of how they differ.39 In addition to this challenge, researchers may experience the ‘temptation (…) to try to make the data fit, thereby reducing both the analytical value and its burden’.40 They may also be lured into choosing the most fashionable theory or the most ‘off the shelf’,41 losing sight of the most relevant one.

Today, we are in the third generation of IS research, which promotes a ‘rigorous research design’.42 However, according to Saetren, ‘[w]e are not even close to a well-developed theory of policy implementation’.23 Franks and Schroeder2 confirmed that ‘[t]he theoretical base for implementation is relatively new and needs to be tested and operationalized in real-world settings’. Many researchers thus use ‘bricolages’.43 They amalgamate several theories or conceptual frameworks but rarely explain how choices were made. Moreover, several conceptual frameworks deal simultaneously with implementation (process evaluation or fidelity assessment) and outputs/outcomes of an intervention (reach, sustainability and impact), such as RE-AIM (http://www.re-aim.org) or EPIS (https://episframework.com). To summarise, there is no such thing as a miracle theory or magic bullet framework.7 44

To help disentangle the possible approaches, frameworks and theories,44–47 table 1 defines four models of causation commonly used in IS and suggests essential readings for each model. These models form a continuum on a ladder of abstraction and complexity. However, they may overlap when, for example, researchers borrow concepts from middle-range or grand theories to build an intervention theory or expand a conceptual framework. Researchers may use these models for three main implementation studies: fidelity assessment, process evaluation and complex evaluation. Nilsen48 also proposed a taxonomy of theories, models and frameworks to make sense of implementation. His taxonomy differs from ours in two respects. First, our definition of IS encompasses all types of interventions and does not solely refer to knowledge translation interventions. Second, his taxonomy is organised according to the overarching aims of theoretical approaches, while ours takes as its starting point the persistent confusion about levels of abstraction and complexity. Recently, Kislov and colleagues25 published a commentary in Implementation Science in which they provide a similar account of three different levels of theories. Our classification of frameworks and theories is complementary because it introduces a fourth level so that frameworks, which are part of IS practice, are included. To our knowledge, it is also the first account of the application of models of causation in the context of global health. With this article, our aim is not to standardise IS research practices but rather to contribute to strengthening global health research practices and reflexivity.

Table 1. Four models of causation

At the end of the continuum (table 1), the approaches are much more complex and call on grand theories or major social theories, such as symbolic interactionism.44 Not all social scientists agree on the existence and relevance of grand theories, which are sometimes associated with ideologies (eg, Marxism, socialism and positivism49). We nevertheless retain the term to support our pedagogical demonstration. To our knowledge, grand theories are not often used to study implementation in global health. Social scientists, however, may call on such theories, for instance, when using Foucault’s concept of biopower to understand HIV interventions,50 or Sen’s capability theory to understand the implementation of user fee exemption interventions in Burkina Faso.51 In IS, the theory is used from the beginning of the research and supports the analysis. It may also be the research output when the aim is to refine the theory.30 52

In the remainder of the article, we explain the use of these four models for fidelity assessment, process evaluation and complex evaluation. We define these three implementation studies, explain how they relate to the four models of causation and provide illustrations from global health (table 2).

Table 2. Three main implementation studies

Fidelity assessment

Fidelity is the degree to which an intervention is implemented as intended. Ensuring fidelity increases the chance of achieving the intended effects, bearing in mind that, in real-life settings, it is impossible to control all the factors that may influence them. A comprehensive framework for implementation fidelity involves: (1) measuring the intervention’s adherence to content, coverage, frequency and duration; (2) understanding the factors moderating the level of fidelity achieved (eg, intervention complexity, facilitation strategies, quality of delivery and participant responsiveness); and (3) identifying essential components.53 54
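For quantitatively trained readers, step (1) of this framework lends itself to simple scoring. The sketch below, in Python, is a minimal and purely hypothetical tabulation of per-component adherence; the component names, the 0 to 1 ratings and the unweighted mean are our own illustrative assumptions, not part of the cited framework.

# Minimal sketch (hypothetical data): tabulating adherence along the four
# dimensions of the fidelity framework described above. Component names,
# ratings and the unweighted mean are illustrative assumptions only.

ADHERENCE_DIMENSIONS = ("content", "coverage", "frequency", "duration")

# Each intervention component is rated between 0 and 1 on each dimension.
components = {
    "home_visits":      {"content": 1.0, "coverage": 0.7, "frequency": 0.5, "duration": 1.0},
    "community_radio":  {"content": 0.8, "coverage": 1.0, "frequency": 1.0, "duration": 0.6},
    "larvicide_rounds": {"content": 1.0, "coverage": 0.4, "frequency": 0.4, "duration": 0.8},
}

def adherence_score(ratings):
    # Unweighted mean across the four adherence dimensions.
    return sum(ratings[d] for d in ADHERENCE_DIMENSIONS) / len(ADHERENCE_DIMENSIONS)

for name, ratings in components.items():
    print(f"{name}: adherence = {adherence_score(ratings):.2f}")

# Moderating factors (intervention complexity, facilitation strategies,
# quality of delivery, participant responsiveness) and essential components
# would still be analysed qualitatively alongside these scores.

Such a score only summarises adherence; it says nothing on its own about why fidelity varies, which is the role of the moderating factors and of process evaluation.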

While high fidelity is desired, adaptation (ie, users’ modification of the original design of the intervention) is likely to occur.53 55 Moreover, certain interventions are adaptive, such that implementers are allowed, or encouraged, to make changes to the original design for better adjustment to the context, ownership and sustainability.56 However, some modifications may detract from the expected outcomes.57 Hence, it is advisable to apply this framework to analyse negative adaptations as well.56 58

At the beginning of the continuum (table 1) is the intervention theory, whose causal logic is used to guide research questions and data collection in order to understand implementation. It is a long-standing approach in the field of evaluation.34 59 Following this approach, researchers propose a model of how the intervention was planned and is supposed to work according to its designers. There are many guides and articles to support researchers in this process60 and to help them involve intervention stakeholders. The intervention theory is usually a visual representation, which comes with a narrative. It may be simple and linear or display multiple layers of causal pathways. Useful illustrations include the intervention theory of the free caesarean section policy in Benin61 or that of a WHO programme (figure 1) implemented in many countries,62 which is explained below (box 3).

Figure 1. Modelling of the intervention theory of the universal health coverage partnership. Source: ref 62.

Fidelity assessment, based on the intervention theory, makes it possible to explain, along with process evaluation, the production or absence of effects. We recently analysed a results-based financing intervention in Burkina Faso, where both a process evaluation and a fidelity assessment were conducted.63 64 Some journals require authors who submit papers on intervention evaluation to use a grid describing the intervention.65 However, they do not request a description of the intervention theory. Grounded in the intervention theory, fidelity assessment is a compelling first step towards grasping the complexity and opening the black box of global health interventions.

Epidemiologists and statisticians are well trained to deal with type I error (rejecting a ‘true’ null hypothesis) and type II error (failing to reject a ‘false’ null hypothesis). Evaluation experts, however, have long warned against type III error, which results from evaluating an intervention that has not been entirely or adequately implemented.66 This is why implementation fidelity assessment (see box 1) is essential, although still underused. Of the 90 RCTs of public health interventions in LMICs with a study protocol published in a publicly available trial registry from January 2012 to May 2016, 28% did not include any implementation fidelity assessment.67 In Burkina Faso, we carried out an impact assessment of a community control intervention against Aedes aegypti, the vector for dengue fever,68 along with an assessment of its implementation fidelity.69 Other examples include assessing the implementation fidelity of a performance-based financing intervention in Malawi70 and Burkina Faso64 and of an Arctic char distribution intervention in Nunavik (Canada).71

Box 1

Fidelity assessment in Cuba

We assessed the implementation fidelity of an evidence-based empowerment strategy for Aedes aegypti control that was replicated in 16 communities in Havana, Cuba. Due to the adaptive nature of the intervention, we focused on adaptation rather than on fidelity. The intervention components and subcomponents were classified as implemented, not implemented or modified, based on qualitative process data collected by implementers. Qualitative data were transformed into quantitative data: frequencies were tabulated for all the communities, and the mean was calculated for each component (a minimal illustration of this tabulation is sketched after the lessons below). Semistructured interviews were also conducted with coordinators of the intervention at different levels to identify implementation challenges. The assessment showed implementation variations according to the communities and components of the strategy. It was not possible to identify negative adaptations nor to provide a detailed account of fidelity.94

Lessons learnt:

  • Researchers should apply a comprehensive conceptual framework for implementation fidelity to categorise adaptations.

  • They should develop a comprehensive description of the intervention making explicit its functioning principles, that is, the intervention theory.

  • They should keep the intervention theory in mind to identify adaptations that might detract from outcomes.
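As a complement to box 1, the sketch below (in Python) illustrates, with hypothetical data, the kind of qualitative-to-quantitative transformation the box describes: classifying each component per community, tabulating frequencies and deriving a mean per component. The 0/0.5/1 scoring of statuses, the component names and the community data are our own assumptions, not the scheme used in the Cuban study.

# Minimal sketch (hypothetical data) of the tabulation described in box 1:
# each component is classified per community, frequencies are tabulated and
# a mean implementation level is derived per component.

from collections import Counter

STATUS_SCORE = {"implemented": 1.0, "modified": 0.5, "not_implemented": 0.0}

# Classification of two illustrative components in 5 of the 16 communities.
classifications = {
    "community_diagnosis": ["implemented", "implemented", "modified",
                            "implemented", "not_implemented"],
    "action_planning":     ["modified", "modified", "implemented",
                            "not_implemented", "modified"],
}

for component, statuses in classifications.items():
    frequencies = Counter(statuses)
    mean_level = sum(STATUS_SCORE[s] for s in statuses) / len(statuses)
    print(f"{component}: {dict(frequencies)}, mean level = {mean_level:.2f}")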

Process evaluation

While fidelity assessment makes it possible to document what has been done compared with what was planned, process evaluation aims to understand how the intervention unfolded and how different factors influenced its implementation. Such factors include the internal dynamics of the intervention; organisational, socioeconomic and other contextual elements; and stakeholders’ behaviours.27 30

Further along the continuum in terms of abstraction (table 1), process evaluations may rely on intervention theories and on descriptive frameworks that divide the implementation of interventions into different categories or constructs. An example is the Consolidated Framework for Implementation Research (CFIR), which we used in Burkina Faso (box 2). A recent systematic review showed that the CFIR is increasingly used worldwide, including in LMICs, where it was used 27 times.72 Several researchers have adapted the CFIR to fit their contexts, showing that frameworks can be adjusted according to research needs: for example, the CFIR was adapted to study the acceptability of a health intervention in Zambia73 and to investigate sustainability in Ghana.74 The CFIR may be mobilised deductively to guide data collection and/or inductively at the analysis stage to organise the data collected.72

Box 2

CFIR process evaluation in West Africa

In 2016, we used the Consolidated Framework for Implementation Research (CFIR) in Burkina Faso. We analysed the implementation of a community intervention to combat Aedes aegypti in addition to conducting impact studies,68 a process evaluation (not yet published) and a fidelity assessment.69 This triple evaluation was guided by the intervention theory developed with stakeholders during the evaluability assessment. The 16 CFIR constructs were chosen and adapted based on the context, the nature of the intervention and the availability of data. The data collected were qualitative (focus groups, interviews and documentation). Like other researchers,95 we also asked stakeholders to quantitatively assess the influence of each construct on implementation (eg, networks and leaders) on a scale of −2 to +2 (a minimal illustration of this scoring is sketched after the lessons below). These scores had no statistical value but helped research participants organise their ideas better. By comparing scores among the three intervention areas, we were able to collect statements to understand the heterogeneity of implementation, as was done in Mozambique.90 This process evaluation provided a better understanding of the contextual factors that influenced implementation and also underscored the factors that contributed to the intervention’s effectiveness. The use of the CFIR was particularly beneficial in that it was complementary to the fidelity assessment. Qualitative sociologists were initially reluctant to use a framework they considered too prescriptive. However, interdisciplinary discussions showed that the CFIR could be used openly and adapted successfully. The participatory adaptation of the CFIR opened discussions about the challenges of the intervention. Some elements of the CFIR were less appropriate or absent in the global health context.

Lessons learnt:

  • Researchers should adapt the CFIR to context and research needs.

  • They should explain why each construct is selected or eliminated.

  • Research teams should discuss and reach a consensus on the meaning of each construct, including its translation into local/national languages.

  • Researchers should translate and operationalise each construct to facilitate data collection.

  • They should remain open and attentive to the emergence of empirical data that may not be related to previously defined constructs.

  • They should consider studying contextual disparities and heterogeneity of implementation and explanatory factors.
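As announced in box 2, the sketch below (in Python) illustrates, with hypothetical data, how the −2 to +2 construct scores might be contrasted across intervention areas. The construct names, area labels and ‘spread’ heuristic are our own illustrative assumptions; as the box stresses, such scores organise discussion rather than support statistical inference.

# Minimal sketch (hypothetical data) of the construct scoring used in box 2:
# stakeholders rate each adapted CFIR construct from -2 to +2 per intervention
# area, and contrasting the scores flags constructs worth probing
# qualitatively. Construct names, areas and scores are illustrative only.

area_scores = {
    "area_A": {"networks_communications": 2, "leadership_engagement": 1,
               "available_resources": -1},
    "area_B": {"networks_communications": 0, "leadership_engagement": 2,
               "available_resources": -2},
    "area_C": {"networks_communications": -1, "leadership_engagement": 0,
               "available_resources": 1},
}

for construct in sorted(next(iter(area_scores.values()))):
    scores = {area: ratings[construct] for area, ratings in area_scores.items()}
    spread = max(scores.values()) - min(scores.values())
    # The scores carry no statistical value; a large spread simply points to
    # heterogeneous implementation to be explored through interviews.
    print(f"{construct}: {scores} (spread = {spread})")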

Besides descriptive frameworks, researchers may also use conceptual or theoretical frameworks, which are analytical. Such frameworks provide causal propositions for how different factors may influence—negatively or positively—implementation and outcomes. For example, researchers mobilised the theoretical literature on the determinants of access to skilled birth attendance to investigate heterogeneous outcomes of a maternal health policy in Burkina Faso.11 First, however, they analysed the policy implementation using the intervention theory.75

Complex evaluation

Complex evaluation analyses implementation and provides critical evidence about the implementation process and its outcomes in relation to, and not in isolation from, the other elements of the context that may influence the intervention. Complex evaluation does not assume that an intervention is complex per se. Instead, complexity refers to ‘understanding the social systems within which interventions are implemented as complex’.76

The realist approach to evaluation, which is gaining interest in global health research,77 78 falls within the realm of complex evaluation. Pawson and Tilley52 propose to disentangle the complexity inherent in social interventions by uncovering interactions among an intervention, its stakeholders (implementers or beneficiaries), the multiple layers of context (eg, social and institutional) within which they interact and (un)expected outcomes. The realist approach starts from the intervention theory and moves to a middle-range theory that considers multiple contextual influences to make sense of expected and unexpected outcomes of an intervention. In global health, an example is the middle-range theory on user fee exemption policies in Africa proposed by Robert et al.79 This middle-range theory is based on Sen’s capability approach. It also draws on theories and frameworks on access to healthcare to explain why such policies may lead to heterogeneous outcomes in different places or at different times. Another realist evaluation investigated a programme supporting health policy dialogue for universal health coverage (box 3), implemented by WHO in several countries under different implementation arrangements.62

Box 3

Realist evaluation of the Universal Health Coverage Partnership (UHC-P)

Supported by several stakeholders, the Universal Health Coverage Partnership (UHC-P) is a WHO-implemented programme that supports low-income and middle-income countries in organising health policy dialogues to produce robust and evidence-informed health policies for universal health coverage (https://uhcpartnership.net). The first step in the UHC-P evaluation was to design its intervention theory, which was informed by a literature review on policy dialogue96 and several meetings and interviews with key stakeholders. This initial theory was then divided into two subtheories to expose the different support strategies (eg, financial support, ongoing or ad hoc technical support, information and data generation), related mechanisms (eg, trust, empowerment of ministries of health and mutual understanding of values) and potential contextual influences.62 These subtheories guided data collection in six African countries, where qualitative case studies were conducted on health planning policy dialogue (Togo, Cape Verde and Niger), health financing policy dialogue (Burkina Faso and Democratic Republic of Congo) and aid coordination policy dialogue (Liberia). Data analysis consisted of: (1) a descriptive analysis of UHC-P implementation barriers and facilitators and (2) a realist analysis of interactions among the UHC-P components and outcomes, highlighting explanatory mechanisms, along a chain of causal events.

Lessons learnt:

  • Researchers should inform stakeholders, especially those who design the intervention, about the nature of the research and the methodological approach, and involve them in modelling the intervention theory.

  • They should consult as much conceptual and empirical scientific literature as possible to identify potential mechanisms and contextual influences.

  • They should identify intervention barriers and facilitators as a first step to uncovering context-mechanism-outcome (CMO) configurations.

The realist approach belongs to theory-based evaluation. It is based on the premise that interventions are complex because they are ‘theories incarnate’.52 It postulates that an intervention is effective both because activities have been set up and because these activities mobilise social actors, whose choices influence the life of the intervention. Social actors’ reactions, reasoning and choices are called mechanisms. Mechanisms are hidden but real and may be triggered in a given context, producing outcomes.80 A realist study will uncover patterns of regularities in the interaction of contexts, mechanisms and outcomes, called CMO configurations. A website is dedicated to the realist approach (http://www.ramesesproject.org), and reporting standards have been published.81
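For readers new to the realist vocabulary, the sketch below (in Python) shows one simple way to record candidate CMO configurations as structured data during analysis. The configuration shown is entirely hypothetical, loosely inspired by the user fee exemption literature cited above, and is not a finding from any of the studies discussed.

# Minimal sketch: one way to keep a realist analysis organised is to record
# candidate context-mechanism-outcome (CMO) configurations as structured
# records. The example configuration is hypothetical, not a study finding.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CMOConfiguration:
    context: str    # conditions under which the mechanism may be triggered
    mechanism: str  # actors' hidden reasoning, reaction or choice
    outcome: str    # the (un)expected outcome produced
    evidence: List[str] = field(default_factory=list)  # supporting excerpts

example = CMOConfiguration(
    context="district with a stable health workforce and prior donor support",
    mechanism="health workers trust the exemption policy and feel empowered",
    outcome="user fee exemption applied consistently to eligible patients",
    evidence=["interview_district_A_03", "supervision_report_2015"],
)
print(f"If {example.context}, then {example.mechanism}, producing {example.outcome}.")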

Conclusion

This article is an introduction to IS and to three main implementation studies for global health. Our aim was not to provide an exhaustive description of all the concepts, theories and examples in this research branch. Global health researchers with a quantitative background will not become implementation scientists after reading this article. However, we believe they will be more aware of the need for rigorous implementation evaluations of global health interventions, alongside impact evaluations. We encourage policy makers and practitioners to use this article to engage in dialogue with researchers and ensure a better use of theories and analytic frameworks to plan and conduct rigorous implementation evaluations of global health interventions. We also encourage all of them to study implementation in collaboration with colleagues from the social sciences and to conduct intervention research collectively in interdisciplinary teams: ‘(B)y learning from other researchers one increases the possibilities of creative solutions’.82 The major contribution of this article is to enlighten policy makers, practitioners and quantitative researchers about the main implementation studies and models of causation, so that they can actively contribute to more robust implementation evaluations of global health interventions.

Acknowledgments

We wish to thank Lara Gautier, Annabel Desgrées du Loû and Joseph Larmarange for insightful input.


Footnotes

  • Handling editor Seye Abimbola

  • Twitter @ValeryRidde, @emilie_robert_

  • Contributors VR came up with the idea for the article, and then the three authors developed the content together on the basis of their collective and respective experiences.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests VR and ER have been funded as research consultants by the Department of Health Systems Governance and Financing of WHO.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement No data for this article.