Abstract
The functionality and performance of public health programmes at all levels of government play a critical role in preventing, detecting, mitigating and responding to public health threats, including infectious disease outbreaks. Multiple and concurrent outbreaks in recent years, such as COVID-19, Ebola and Zika, have highlighted the importance of documenting lessons learnt from the public health responses of national and global agencies. In February 2020, the US Centers for Disease Control and Prevention (CDC) Center for Global Health (CGH) activated the Measles Incident Management System (MIMS) to accelerate the ability to detect, mitigate and respond to measles outbreaks globally and advance progress towards regional measles elimination goals. The activation was triggered by a global resurgence in reported measles cases during 2018–2019 and supported emergency response activities conducted by partner organisations and countries. MIMS leadership decided early in the response to form an evaluation team to design and implement an evaluation approach for producing real-time data to document the progress of response activities and inform timely decision-making. In this manuscript, we describe how establishing an evaluation unit within MIMS, and engaging MIMS leadership and subject matter experts in the evaluation activities, was critical to monitoring progress and documenting lessons learnt to inform decision-making. We also explain how the CDC’s Framework for Evaluation in Public Health Practice was applied to evaluate the dynamic events throughout the MIMS response. Evaluators supporting emergency responses should use a flexible framework that is adaptable to dynamic contexts and should document response activities in real time.
- Health systems evaluation
- Immunisation
- Measles
Data availability statement
Data sharing is not applicable as no data sets were generated and/or analysed for this study.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
SUMMARY BOX
There is limited research on outbreak response system evaluations. When evaluations of outbreak response systems have been conducted, they have rarely been iterative and often focused on narrow objectives specific to the response, which limits widespread applicability.
Conducting sound evaluations during outbreak response activations rather than at the end of the response can help guide real-time decisions and ensure that lessons learnt are integrated into routine planning and programme implementation.
Future outbreak response systems will benefit from this real-time, iterative and flexible approach to generate evidence that will inform decision-making, identify gaps and redirect actions, interventions and resources as needed.
Introduction
The COVID-19 pandemic, a public health emergency of international concern (PHEIC), has required substantial public health resources globally since 2020. Additionally, several other PHEICs have occurred since 2009: influenza A (H1N1) in 2009, MERS-CoV in 2012, Ebola virus disease from 2013 to 2016, Zika virus in 2015 and monkeypox in 2022.1 2 These events affected between 26 and 214 countries; however, no PHEIC disrupted global operations to the extent of COVID-19.2 Learning from the COVID-19 response, global health leaders and professionals have advocated for enhancement of emergency preparedness and response efforts to prevent or minimise the impact of future pandemics.3 Documentation and analysis of each outbreak revealed important lessons about the ways national public health systems prepared for and responded to these threats. Nevertheless, there is limited research on outbreak response evaluation.4 5 After-action and programme performance reviews conducted following outbreak response activities are commonly used evaluation approaches for epidemics and humanitarian emergency responses.6–9 However, delays in the dissemination of evaluation reports are barriers to applying lessons learnt in a timely manner and result in missed opportunities for guiding response and postresponse efforts.
A systematic review of outbreak responses during 2010–2019 found that for every 10 epidemics, only one response completed a corresponding evaluation.10 Another analysis of epidemic response evaluations during 2008–2019 observed that the evaluation frameworks applied were rarely iterative and were often narrowly focused, limiting their generalisability to other outbreak response evaluations.11 However, there are evaluation frameworks that respond in part to the demands of outbreak response contexts by incorporating the following elements: mixed-methods, iterative and participatory approaches; balancing speed with trustworthiness; and operating simultaneously with programme implementation. Integrating these elements into an outbreak response evaluation strategy allows evaluators to provide real-time, relevant feedback during the event to support evidence-based decision-making.8 12 Real-time evaluation refers to an evaluation approach that ‘provides practical recommendations that can improve ongoing operations in the short term’ and allows for ‘rapidly evaluating the effectiveness and impact of its operational responses to emergencies and ensure that findings are “used as an immediate catalyst for organizational and operational change”’.8 10 12 Evaluators can then systematically collect and review data as the crisis unfolds to improve programme quality and quickly address operational problems.
A global resurgence of measles cases was reported from 2018 to 2019, driven by large outbreaks in several countries, with all regions reporting increased measles incidence and outbreaks.13 In response, the US Centers for Disease Control and Prevention (CDC), Center for Global Health (CGH), activated the Measles Incident Management System (MIMS) from February 2020 to February 2021, with the goal of accelerating the agency’s ability to detect, mitigate and respond to global measles outbreaks. As the COVID-19 pandemic escalated, disruptions to immunisation services and outbreak response campaigns exacerbated the problem, with a growing population at high risk for measles outbreaks.14–16
The MIMS organisational structure included leadership positions, technical teams and liaison positions, as depicted in figure 1. Due to COVID-19-related travel restrictions and mitigation measures, by March 2020 MIMS had shifted to 100% remote operations. Established relationships with key immunisation partners (eg, WHO; the United Nations Children’s Fund (UNICEF); Gavi, the Vaccine Alliance (Gavi); and the Measles and Rubella Initiative (M&RI)) were leveraged to support priority countries. MIMS leadership established an evaluation team within its structure to produce real-time data and document CDC’s contributions and achievements, challenges encountered and lessons learnt. In this manuscript, we describe how establishing an evaluation unit as part of the outbreak response and engaging MIMS leadership and subject matter experts (SMEs) in evaluation activities was critical to monitoring progress and documenting lessons learnt to inform leadership decisions. We also explain how applying the CDC’s Framework for Evaluation in Public Health Practice was a practical and flexible way to evaluate dynamic events during the MIMS response.
Application of the Centers for Disease Control and Prevention evaluation framework steps during an outbreak response
The inclusion of an evaluation team within the MIMS structure throughout the response (February 2020–February 2021) allowed for the establishment of a system to document the response’s progress, challenges and lessons learnt. Additionally, an evaluation workgroup composed of MIMS leadership, technical team leads and the evaluation team was established to develop the evaluation framework, participate in the implementation of key evaluation activities and provide technical insights in real time.
The evaluation team followed the CDC evaluation framework’s six steps: (1) engage stakeholders, (2) describe the programme, (3) focus the evaluation design, (4) gather credible evidence, (5) justify conclusions and (6) ensure use and share lessons learnt (figure 2). The CDC framework depicts unidirectional arrows, which could be interpreted to mean that the process is stepwise and cyclical and that the steps occur chronologically.17 However, our application of the framework to MIMS, though strategic, did not proceed so linearly (eg, some steps were applied in a different order and/or were revisited through an iterative approach). The following sections describe the application of the six-step framework during the MIMS response.
1. Engage stakeholders
The CDC evaluation framework begins with the engagement of three groups of stakeholders: (1) those involved in programme operations, (2) those who are primary users of the evaluation and (3) those served by or affected by the programme. The evaluation team found that, within the context of an outbreak with multiple responses happening concurrently, routine engagement across these stakeholder groups and adaptability to changing priorities were critical to achieving the evaluation’s intended objectives. Details for each stakeholder group are described below.
For MIMS, those involved in programme operations were the MIMS staff and the evaluation workgroup. Meetings were held routinely to discuss the progress of stakeholder engagements and deployer activities and to ensure that the evaluation focus stayed relevant and feasible. The evaluation team worked closely with the evaluation workgroup to design a theory of change (TOC) for the response, which later evolved into the MIMS logic model (figure 3). After the evaluation workgroup reached consensus on the logic model, the evaluation team developed the evaluation purpose and questions, which were revisited periodically and revised as response priorities shifted and the response implementation strategy evolved.
Those who are primary users of the evaluation included CDC Center, Division and Branch leadership and technical team members working on measles elimination efforts. There was interest in documenting achievements to inform the eventual transition of response activities back into standard operations and established programmes. It is important to note that MIMS activities were based on existing CDC programme activities of the Measles Elimination Team within the Global Immunisation Division. The workgroup described this pre-existing foundation of measles work as an advantage over other responses, such as Ebola, Zika and COVID-19, which did not have an established programmatic foundation with a history of field experience and existing partner relationships.
Those served or affected by the programme included partners external to CDC (ie, Ministries of Health and multilateral health organisations) that the evaluation team engaged with in a consultative manner. MIMS principal partnerships included WHO, UNICEF, Gavi, the Bill & Melinda Gates Foundation (BMGF) and the M&RI. The MIMS infrastructure included a ‘partnership liaison’ position to ensure that response activities were coordinated among measles partners and to leverage existing resources. Together, partners developed international strategic plans, field guidance documents and a list of prioritised countries for financial and technical support.
2. Describe the programme
Initially, the evaluation team developed a TOC model based on input from the evaluation workgroup and two key documents: (1) a global strategic response plan for measles outbreaks and (2) a concept note describing CDC’s measles outbreak response activities. The MIMS TOC included three challenges that contributed to the global resurgence of measles cases: (1) stagnant routine immunisation (RI), (2) lack of universal introduction of a measles second dose in RI services and (3) delays in implementation of supplemental measles vaccination campaigns. The TOC included strategies such as rapid response to outbreaks and strengthening outbreak preparedness. Outcomes of the strategies were linked to broad goals such as improving timeliness of investigation and case management and improving policies to enable vaccination and treatment of identified at-risk populations. Using a TOC to define the problem and associated strategies allowed the evaluation team to focus on efforts within the scope of the MIMS response. The TOC was subsequently used to develop a MIMS logic model (figure 3) that included inputs, activities, outputs and outcomes and linked the response goals to a health impact.
3. Focus the evaluation
This step of the framework included determining the evaluation design; defining the evaluation questions and the intended uses and users of the results; and establishing the evaluation methods.17 Early in the MIMS response, the evaluation team recognised the need for a strategy that would keep the evaluation focused on the activities and objectives as they evolved. For example, during early discussions with stakeholders regarding the scope of the evaluation activities, there was interest in describing differences between preactivation and postactivation measles outbreak response activities. The evaluation workgroup decided that, because the response activities were closely aligned with pre-response programmatic activities, there was minimal added value in describing these differences. Furthermore, MIMS deployment staffing needs competed with concurrent COVID-19, Ebola and polio outbreak responses, leading to challenges with continuity of staffing and varying lengths of deployments. The evaluation team responded by engaging newly onboarded staff on the purpose of the evaluation and the intended use of the findings. The initial set of evaluation questions, while relevant and important, was broad in scope and may not have accommodated these dynamics (box 1).
Measles Incident Management System (MIMS) evaluation purpose and questions
Evaluation purpose: the purpose of the evaluation was to document CDC’s contributions and achievements, challenges encountered and lessons learnt.
Initial evaluation questions:
Which of CDC’s technical capabilities demonstrate a comparative advantage as they relate to prevention, detection, response and sustainability?
How did MIMS improve the quality of the response to measles outbreaks?
What are the financial and staffing costs of the measles response, and how can this information inform CDC’s role in the M&RI postresponse?
What tools or innovations improved outbreak investigations during this response?
What are promising/best practices from MIMS that could inform countries’ standard operations and GID’s role after the response stands down?
Final evaluation questions:
How and to what degree did the MIMS activity implementation:
Improve the quality of measles outbreak response activities?
Strengthen the evidence-base that informs measles elimination activities?
Improve engagement between CDC and partners in measles elimination activities?
What recommendations can be derived from 2020 MIMS implementation to inform future global measles strategies and outbreak response activities?
CDC, Centers for Disease Control and Prevention; GID, Global Immunisation Division.
4. Gather credible evidence
A mixed-methods approach was used to answer the evaluation questions and included desk reviews of key documents, key informant interviews with CDC leadership and global partners, focus group discussions with MIMS staff and a deployer survey (table 1). Most of the MIMS activities had been initiated prior to the response through an established programmatic structure; however, these activities had not been systematically monitored and evaluated.
5. Justify conclusions
Through routine meetings with the technical teams and leadership, the evaluation team validated their interpretations of data sets and identified opportunities for triangulation between data sources for further understanding. The turnover of staff had both positive and negative effects on this process. New staff brought forth a broadened understanding of the data, based on differing levels and types of technical expertise and overarching programmatic perspectives. However, they were new to the evaluation model and needed orientation to the evaluation processes, activities and timelines. There were routine discussions with MIMS leadership and technical teams about priority data elements and their associated interpretations. The evaluation team also worked with communications and policy advisors to help frame the findings for presentations and reports for relevant target audiences.
6. Ensure use and share lessons
The MIMS evaluation was an opportunity to examine and document the collective achievements and lessons learnt. Findings from the evaluation were shared with CDC leadership through presentations, meetings and conferences, internal bulletins, briefings and a final report. MIMS and CGH leadership used the results to track progress, identify priority activities and guide decision-making for eventual deactivation of the response.
A critical challenge for the use and sharing of the evaluation results was that the weekly incident management meetings in the MIMS reporting structure did not align with the time needed to complete activities (several months) and reach key benchmarks. The evaluation team therefore communicated to leadership how frequently results and outputs would become available.
Some examples of MIMS outputs from the evaluation are described below:
Staffing and deployment tracking: Of the 47 deployers supporting the MIMS response between March and December 2020, 70% (n=32) came from one division within CDC, and over 50% of all deployers (n=24) came from one specific branch of that division. There was an advanced level of measles and immunisation subject matter expertise within the MIMS functional roles; that is, MIMS staff were often considered experts in the field, whereas some responses rely on a varied pool of staff who might not possess such disease-specific knowledge. Dependence on a single organisational unit to sustain an activation for an entire year was exceptional.
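To make the reported shares concrete, the minimal sketch below (in Python, with a hypothetical roster whose unit names are placeholders) tallies deployers by division and branch. Note that 32 of 47 is approximately 68%, rounded to 70% above, and 24 of 47 is just over 50%.

```python
from collections import Counter

# Hypothetical deployment log: one record per deployer; unit names are placeholders.
deployers = (
    [{"division": "Division A", "branch": "Branch A1"}] * 24     # the single largest branch
    + [{"division": "Division A", "branch": "Branch A2"}] * 8    # remainder of that division
    + [{"division": "Other divisions", "branch": "Various"}] * 15
)

total = len(deployers)                                    # 47 deployers in total
by_division = Counter(d["division"] for d in deployers)   # 32 from one division
by_branch = Counter(d["branch"] for d in deployers)       # 24 from one branch

print(f"Total deployers: {total}")
print(f"From one division: {by_division['Division A'] / total:.0%}")   # ~68%, reported as 70%
print(f"From one branch:   {by_branch['Branch A1'] / total:.0%}")      # ~51%, ie, over 50%
```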
Immunity profiles: These profiles synthesised data on a country’s susceptibility to measles by age in years to identify immunity gaps and target immunisation interventions. Profiles were generated for all 194 WHO Member States. In addition, profiles with corresponding narratives were generated for 27 countries to address specific needs: 12 immunity profiles were generated for immunisation funding applications, 5 as part of the comprehensive measles risk assessments (MRAs) and 10 for other purposes such as estimating the impact of supplemental immunisation activities (SIAs), outbreak preparedness and strategic planning.
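As an illustration only, and not the methodology or data used for the MIMS immunity profiles, the sketch below shows how a crude age-based susceptibility estimate could be assembled from assumed birth-cohort sizes, routine coverage with the first and second doses of measles-containing vaccine (MCV1/MCV2), campaign (SIA) coverage and assumed vaccine effectiveness values.

```python
# Hypothetical immunity profile by single year of age. All figures, the
# overlap/independence assumptions and the effectiveness values are illustrative only.

VE_ONE_DOSE = 0.85   # assumed effectiveness of a single MCV dose
VE_TWO_DOSE = 0.97   # assumed effectiveness of two MCV doses

# (age in years, cohort population, MCV1 coverage, MCV2 coverage, SIA coverage)
cohorts = [
    (1, 1_000_000, 0.80, 0.00, 0.00),
    (2, 1_000_000, 0.82, 0.40, 0.00),
    (3, 1_000_000, 0.85, 0.45, 0.60),
    (4, 1_000_000, 0.83, 0.50, 0.60),
]

for age, population, mcv1, mcv2, sia in cohorts:
    two_dose = min(mcv1, mcv2)                # crude assumption about dose overlap
    one_dose_only = max(mcv1 - two_dose, 0.0)
    protected = two_dose * VE_TWO_DOSE + one_dose_only * VE_ONE_DOSE
    # Assume the campaign reaches children independently of routine status
    # (a strong simplification) and protects a share of remaining susceptibles.
    protected += (1.0 - protected) * sia * VE_ONE_DOSE
    susceptible = round(population * (1.0 - protected))
    print(f"Age {age}: estimated susceptible ≈ {susceptible:,}")
```

A real profile would draw on additional data sources and more careful assumptions about dose overlap; the sketch only conveys the general idea of translating age-specific coverage into susceptibility by age.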
Comprehensive measles risk assessments: The MRAs assisted countries and partners with collating, analysing and triangulating data to identify populations at high risk for measles outbreaks. The MRA reports accounted for COVID-19 disruptions to immunisation services and their impact on vaccination coverage, estimated population immunity gaps and included evidence-based recommendations for risk mitigation activities. During the MIMS activation, the collaborative team completed comprehensive MRAs for five countries: Angola, Chad, Guinea, Kenya and South Sudan.
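For readers unfamiliar with how such assessments triangulate indicators, the following purely hypothetical scoring sketch (not the actual MRA tool; its indicators, weights and cut-offs are invented) illustrates how district-level data might be combined into a risk category.

```python
# Hypothetical district-level risk scoring; indicators, weights and thresholds
# are illustrative and do not reproduce the MRA methodology.

def risk_category(mcv1_coverage: float, months_since_last_sia: int,
                  recent_confirmed_cases: int, surveillance_adequate: bool) -> str:
    score = 0
    score += 3 if mcv1_coverage < 0.80 else (1 if mcv1_coverage < 0.90 else 0)
    score += 2 if months_since_last_sia > 48 else (1 if months_since_last_sia > 24 else 0)
    score += 2 if recent_confirmed_cases > 0 else 0
    score += 0 if surveillance_adequate else 1
    if score >= 6:
        return "very high"
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

# Example: a district with low coverage, an overdue campaign and recent confirmed cases.
print(risk_category(0.72, 60, 12, surveillance_adequate=False))   # -> very high
```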
Enhanced immunisation activities through targeted funding: MIMS provided critical financial and operational support to ensure that high-quality immunisation activities could be conducted safely during the COVID-19 pandemic through provision of personal protective equipment and enhancement of other mitigation measures. $4.5M was mobilised to support measles vaccination campaigns in Ethiopia, Nigeria and Kenya targeting over 23 million children.
Lessons learnt
The CDC evaluation framework is amenable to the dynamic contexts typical of an infectious disease outbreak response. It accommodated the MIMS response’s needs by allowing the evaluation questions to be revisited and reframed methodically through an iterative process, adapting the logic model and ultimately strengthening evaluation outputs for dissemination and meaningful use. Lessons learnt from applying the CDC evaluation framework within an outbreak environment are summarised below.
Because response objectives and activities are likely to shift, stakeholders should frequently assess the continued relevance of the evaluation approach. Evaluators should also be prepared for changes in stakeholder groups and the need to sensitise and engage these individuals routinely. It is critical to document outputs and process elements in a context where activities change frequently, while also leveraging the contributions of partners. Evaluators should be receptive to possible shifts in perspectives, values and standards, and be flexible in changing direction with evaluation components.
Responses that are based on an existing programme should leverage established infrastructure and field expertise to inform the TOC and logic model of the response. Since response durations are uncertain, it is useful to establish periodic benchmarks for data reporting, analysis and dissemination based on best estimates of how frequently evaluation results will become available. The initial benchmarks should be reassessed and modified as the response progresses.
Shifting priorities in an emergency response may require changes in data collection. Data collection methods and sources must be adaptive to the dynamic context. The evaluation team should work with leadership and response staff to set realistic expectations related to the completion and documentation of outputs and outcomes in a complex response environment.
It is important to present results in a manner that supports the constantly evolving and diverse stakeholder needs. An evaluation workgroup (or similar structure) with leadership and SMEs can help ensure frequent and routine opportunities for inclusion and representation of broad perspectives, expertise and standards. It is also beneficial to engage communications and policy SMEs within the organisation for assistance with synthesising and disseminating results for various target audiences and through non-traditional modes of communication.
Early and sustained engagement of stakeholders in the evaluation activities results in their buy-in and prompts use of results during and after the response. This is especially important if there is frequent turnover of response staff. Lessons learnt through evaluation during a response should ultimately guide any necessary changes to its objectives, prioritisation of response activities, identification of staffing and financial needs and response deactivation decisions and timelines. These lessons should also promote the sustainability of sound practices in non-response periods and expand knowledge within the programme to better prepare for future responses.
Conclusions
The CDC evaluation framework that guided the MIMS response evaluation was a flexible and practical tool for planning and implementing activities in a dynamic infectious disease outbreak response context. As a result of using this framework, the MIMS evaluation generated important outputs and outcomes that informed the agency’s decision to deactivate the response and incorporate expanded efforts conducted during the response into the established programmatic team. Given the evolving nature of such a response, it is important to use realistic and flexible monitoring and evaluation methods.
Conducting a sound evaluation during a response activation can help generate evidence to inform decision-making, identify gaps and redirect public health efforts, interventions and resources. Waiting until a response is deactivated to measure the outputs, outcomes or impact of implemented activities can result in missed opportunities to make immediate improvements to specific strategies. Targeted data collection to address outbreak and evaluation priorities generated lessons learnt for integration into routine planning, programme implementation and future response strategies.
There is a need to document and disseminate lessons learnt from evaluations conducted during outbreak responses to strengthen the evidence-base. Evaluation findings can and should guide incident management leadership and staff in monitoring the status of response objectives and identifying evolving priorities and gaps. Our findings support the use of an adaptable approach that serves the needs of intended users and equips them with timely data to be able to make informed decisions and guide the direction of strategies during an outbreak response.
Ethics statements
Patient consent for publication
Ethics approval
Not applicable.
Acknowledgments
Thanks to the additional CDC colleagues who supported the development and implementation of the MIMS monitoring and evaluation plan: Robb Linkins, Robert Perry, David Sniadack, Paul Rota, Raydel Anderson, Pratima Raghunathan, Elizabeth O’Mara, Chelsey Austin, and Erin Palmisano.
Footnotes
Handling editor Seye Abimbola
Contributors SJ and SLY led the design of the MIMS monitoring and evaluation plan, with support from SB and DGP. SJ conceptualised the paper, which was refined through multiple rounds of feedback from all authors. SJ and SB conducted the literature review and drafted the manuscript, with support from DGP. SLY provided overarching guidance on the direction and technical framing of the manuscript. All coauthors contributed to the implementation of the MIMS monitoring and evaluation plan and contributed substantially to the writing of the manuscript. All authors approved the final version of the submitted manuscript.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Disclaimer The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the US Centers for Disease Control and Prevention or any institutions they are affiliated with.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.