Discussion
Our systematic review included and characterised 46 cluster randomised trials on the implementation of effective healthcare practices across high-income, middle-income and low-income settings. From these, we identified nine components of multifaceted implementation interventions: leadership, barrier identification, tailoring to the context, patient involvement, communication, education, supportive supervision, provision of resources, and audit and feedback. The four most frequently used components were education, audit and feedback, leadership and provision of resources. These components appeared in both statistically successful and unsuccessful trials, as defined by each trial's primary outcomes. Neither the number nor the combination of implementation components was associated with a statistically significant improvement in the primary outcome reported by each study. It is worth noting that statistical significance, or its absence, does not translate into clinical significance.63
The formal use of leadership was common to many of the strategies in the included studies. There is little argument against the need for leaders or champions as a crucial part of any complex intervention,64 or as a basic component of a functioning health system.65 A Cochrane review on local opinion leaders as an intervention found that they may have some impact on changing professional practice, but the method of delivery and their role as a stand-alone intervention are difficult to define.66 Reporting on how leaders are identified, recruited and supported is often poor or absent, and there is limited clarity about, and assessment of, which aspects of leadership contribute to its success. This prevents a clear articulation of exactly how leadership components should be developed and implemented. The relationship between leadership and successful implementation may be more complex still: do settings that successfully implement an evidence-based practice already benefit from good leadership, rendering additional leadership efforts during implementation unnecessary? Additionally, the absence of an explicit statement of leadership as a component of a complex intervention does not mean it was not a critical part of that intervention.
The same question applies to other components, such as barrier identification and tailoring to the local context. A situational assessment must identify contextually specific facilitators and barriers that need to be considered when implementing an intervention,67 yet these critical steps were infrequently identified as implementation components in the studies in this review. These components, like others, may need to be applied and reviewed repeatedly as the implementation is rolled out and assessed. Patient involvement was not a commonly reported component of implementation in the included studies, yet all of them clearly involved patients in some way, usually through measurable clinical outcomes. The specifics of patient, care recipient or public involvement will vary widely depending on the practice in question.68 69 In our opinion, all implementation studies should at least consider the need to engage and involve individuals (including patients, care recipients, families, consumer groups or the public), both from the perspective of equity and rights and because the experiences of healthcare consumers shape how they perceive and use healthcare services.70
There are many methods for providing education, yet assessing the effectiveness of any of them remains problematic.71 How does one determine which parts of the education process (such as attendance, delivery, engagement and assessment) are the critical elements that lead to the success of the intervention? For example, there is evidence that education delivered as an outreach visit changes professional practice, but the variation in response to this intervention (as both an effective and an ineffective tool) is difficult to explain.72 We saw a vast array of educational methods in our review, with no clear pattern separating successful from unsuccessful interventions. As a single intervention, targeted education of health professionals (with various delivery methods) may have a small effect on both practice and patient outcomes in some trials,73 74 but the variation in response is wide. The same phenomenon is evident in our review: education was a component of a complex intervention in more than 90% of the included studies, yet only slightly more than half found a significant effect of their intervention on the reported primary outcomes.
Supportive supervision was rarely identified as a component of implementation in the included studies. This process may be necessary at multiple points during implementation (all staff members could reasonably benefit from it), and it must always be done in a way that facilitates the delivery of quality care.75 A functioning health system, including one that is adequately resourced, is clearly critical to the success of any new intervention. While the studies in this review refer to resources in the context of those necessary and new to the implementation, it is also worth considering that basic resources (human, financial, capital and material) are critical to the success of both health systems and implementation processes.75
Audit and feedback has been shown in a Cochrane review to improve professional practice,76 but, as with education, multiple factors can affect the size of the impact. Leadership has been shown to be a critical factor in the success of an audit and feedback component;77 however, half of the included studies that identified audit and feedback in their intervention did not specify leadership as a component. This makes it particularly difficult to evaluate these components in isolation.
It is evident from our results that these components are not unique to any one step in the implementation process. Nor are they unique to a particular specialty, suggesting that these nine components are the most commonly tested in cluster randomised trials across settings and health areas when implementing an evidence-based practice. However, they are also not unique to studies one could classify as successful on the basis of the reported primary outcomes. This remains one of the significant challenges in taking an evidence-based practice to successful implementation: how does one deploy the right mix of components in the right way so that implementation succeeds and practice change occurs? At the least, we suggest that explicit consideration of these nine components is necessary as part of any facility-based implementation activity. The limited reporting and/or assessment of the implementation process, combined with the need for caution in judging success given the vast differences in primary outcomes, meant we could not attribute significant outcomes (and successful implementation trials) to specific aspects of the implementation components.
In light of our findings, one of the main lessons learnt centred on the challenges of appropriately reporting and analysing the types of studies included. Implementation studies (of varying trial designs) are inconsistently reported in the literature. This may be due in part to a lack of standard reporting guidelines,13 14 78 as well as to variability and inconsistency in the terms used to describe implementation strategies.79 Poor reporting of the methodological process of implementation makes it difficult to assess at what point in the process certain factors were critical to success. Furthermore, systematic reviews of implementation strategies show that these interventions are effective some of the time in some settings, but not all of the time in all settings; a clearer framework for the implementation process (and for measuring its effect) would improve the robustness of the data generated and their applicability to other implementation studies.80 For example, the Medical Research Council (UK) provides researchers with a framework for developing and evaluating complex interventions,81 and one could argue that this framework should be used in combination with a standardised reporting process in implementation studies.
Many frameworks exist to consider when planning an implementation intervention.82 83 We have identified nine components of implementation that are consistent across cluster randomised trials and disciplines. While a multifaceted intervention is not necessarily more effective than a single-component intervention,84 it is possible that some or all of these nine components could be integrated into existing implementation frameworks, and this warrants further consideration.
Strengths and limitations
We undertook a broad search of trials covering all medical specialties. Whereas other reviews have used a more generic category of implementation strategies (such as multifaceted interventions),85 our synthesis unpacked these strategies into their individual components.
The search findings are limited by the variable inclusion of the trial design (cluster) in titles and abstracts, as reported in the literature.86 Despite employing a full-text search strategy to overcome this, we might have missed relevant citations. We did not include other study designs, which might have helped identify further components of implementation. Moreover, the quality of implementation reporting was highly variable, limiting detailed analysis of the components and their combinations. Although our review was comprehensive in terms of medical specialty, our analysis of effects was limited to the primary outcomes reported by each study.