Abstract
Introduction Robust metrics for national-level preparedness are critical for assessing global resilience to epidemic and pandemic outbreaks. However, existing preparedness assessments focus primarily on public health systems or specific legislative frameworks, and do not measure other essential capacities that enable and support public health preparedness and response.
Methods We developed an Epidemic Preparedness Index (EPI) to assess national-level preparedness. The EPI is global, covering 188 countries. It consists of five subindices measuring each country’s economic resources, public health communications, infrastructure, public health systems and institutional capacity. To evaluate the construct validity of the EPI, we tested its correlation with proxy measures for preparedness and response capacity, including the timeliness of outbreak detection and reporting, as well as vaccination rates during the 2009 H1N1 influenza pandemic.
Results The most prepared countries were concentrated in Europe and North America, while the least prepared countries clustered in Central and West Africa and Southeast Asia. Better prepared countries were found to report infectious disease outbreaks more quickly and to have vaccinated a larger proportion of their population during the 2009 pandemic.
Conclusion The EPI measures a country’s capacity to detect and respond to infectious disease events. Existing tools, such as the Joint External Evaluation (JEE), have been designed to measure preparedness within a country over time. The EPI complements the JEE by providing a holistic view of preparedness and is constructed to support comparative risk assessment between countries. The index can be updated rapidly to generate global estimates of pandemic preparedness that can inform strategy and resource allocation.
Keywords: health systems, public health, epidemics
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Key questions
What is already known?
Estimates of epidemic preparedness drawing on Joint External Evaluation (JEE) data show that many countries are unready for a major outbreak.
However, JEE data currently cover roughly one-third of countries worldwide, leaving large gaps in information.
What are the new findings?
We developed an Epidemic Preparedness Index (EPI), which ranks 188 countries and includes health capacities as well as non-health system factors such as financing, institutional capacity and infrastructure.
Capacity to detect and respond to epidemics and pandemics is weak in West and Central Africa and Southeast Asia, regions known to have high risk for emergence of pathogens with pandemic potential.
EPI scores are correlated with proxy measures for preparedness, including the timeliness of outbreak detection, investigation and reporting, and population vaccination rates during the 2009 H1N1 influenza pandemic.
What do the new findings imply?
The overlap of areas with low preparedness and high disease emergence risk suggests that the likelihood of an isolated disease emergence event leading to an epidemic or pandemic may be higher than previously understood.
Introduction
Infectious disease epidemics and pandemics periodically threaten the health and livelihoods of people in wealthy and poor countries alike.1 2 Evidence suggests that the risk of emerging infectious diseases has increased over time due to intensification of international travel, trade and livestock husbandry, as well as increasing human population density and changing interactions between humans and wild animals.3 4 These drivers of disease emergence are likely to continue and intensify,2 and additional drivers of ecological change and disruption such as global warming are likely to further amplify disease emergence risk. Given the public health risk posed by epidemics and pandemics, it is critical to systematically assess global preparedness, and to identify regions that are not well equipped to respond to such threats to public health.
Despite significant investments in global health surveillance and capacity building, large parts of the world are unprepared to manage infectious disease threats. According to recent estimates drawing on the WHO-supported Joint External Evaluation (JEE) process, only a minority of countries for which data are available are fully compliant with the 2005 International Health Regulations (IHR), which require demonstrable capacity to mitigate public health risks.5 Such capacities matter greatly to human health. There is clear evidence that the scale and severity of the 2013–2016 West Africa Ebola epidemic was exacerbated by the weak state of health systems in West Africa, and in particular, limited local capacity for public health surveillance and outbreak response.6
National governments remain the primary actors and first line of defence in responding to high-priority infectious disease outbreaks. They are also the primary locus of capacity-building efforts aimed at improving preparedness. Improving global capacity to respond to infectious disease crises requires better data on national-level preparedness worldwide, in order to inform and calibrate both foreign and domestic investments in capacity.7 Existing frameworks for measuring preparedness, including WHO’s IHR Core Capacity Monitoring Framework and the JEE, have substantially improved our understanding of preparedness to mitigate global health threats, and have shed light on gaps in preparedness both by function and across geographies. However, these frameworks have two limitations. First, while both focus on public health competencies in great depth, they do not fully address the broader range of non-health system factors, including institutional, financial and infrastructural capacities, which are also fundamental building blocks for effective response to infectious disease epidemics. For example, the JEE includes an indicator for ‘Linking Public Health and Security Authorities’, which is important for assessing the coordination between these institutions. However, this indicator does not measure the capacity of the security authorities to perform their required functions. Given the importance of this and other enabling functions, metrics tracking these additional capacities should be incorporated into assessments of epidemic preparedness.
Second, the IHR consists primarily of self-reported data, which raises the potential for bias and inaccurate reporting. By contrast, the JEE includes a robust external peer review, but requires intensive and costly data collection and analytical efforts, limiting the speed and frequency with which it can be conducted, revised and updated. Without frequent updates, key changes in preparedness metrics might not be tracked in a timely manner, leading to potentially outdated information being used in decision-making for resource allocation.
Recent work to define metrics for comparative assessment of country-level preparedness has underscored the widely accepted need for effective tools in this space. The organisation ‘Prevent Epidemics’ has published country-level assessments on its website that draw exclusively on JEE data.8 Moore et al recently reported a disease vulnerability index that combines measures of intrinsic disease risk with measures of preparedness.9 Here we aim to measure country-level preparedness independent of intrinsic disease risk, in order to disentangle these two distinct drivers of risk and allow for their separate characterisation.
We address these gaps by developing a conceptual framework for the comparative measurement of epidemic preparedness and response capacity. We operationalise this framework through a global quantitative Epidemic Preparedness Index (EPI) measuring relative epidemic and pandemic preparedness across 188 countries.
Methods
A conceptual framework for epidemic preparedness and response
Epidemic preparedness reflects the capacity of institutions—public health authorities, health systems and emergency response bodies—to detect, report and respond to outbreaks. Government institutions must detect and assess potentially consequential outbreak events, report outbreaks and their causes to relevant national and international organisations and networks, and respond with measures to reduce the health, societal and economic impacts of outbreaks.5 7 While preparedness for public health emergencies is typically considered in terms of surveillance, response and health capacity, these functions in turn rely on a broader set of institutional, financial and infrastructural factors.6 7 Accordingly, we developed a multidisciplinary framework to holistically evaluate a broad set of capabilities, and identify five general types of capabilities that are required for effective epidemic preparedness (figure 1).
Public health infrastructure
Effective public health systems are vital for early detection, mitigation and management of infectious disease outbreaks. Early detection requires robust surveillance and effective outbreak investigation capabilities for rapidly identifying, characterising and tracking emerging infectious diseases.10 This in turn requires effective health institutions able to access and monitor the country’s entire territory and population. Once an epidemic is underway, the healthcare and public health systems must be able to identify, investigate, monitor and manage abrupt surges in cases through the mobilisation of personnel and resources. Health systems must be able to manage clinical care for infected persons and limit further transmission in clinical facilities, while public health agencies must also be able to implement effective non-pharmaceutical measures to limit the spread of infection.11 Lastly, the health system must be able to coordinate activity with other national and international agencies.
Physical and communications infrastructure
The quality and coverage of transportation and communications infrastructure can impact the effectiveness of disease surveillance as well as the speed and quality of public health response, by enabling (or constraining) the movement of personnel, information and medical supplies.12 While transportation systems can facilitate the movement of infected persons and therefore the spread of disease, they also enable public health personnel to access, surveil and provide care for populations at risk or affected by infectious disease outbreaks. Communications infrastructure is similarly important, and the growing availability of mobile phones and internet-based reporting tools can support outbreak and diagnostic reporting, particularly where traditional surveillance systems are weak or porous.13 Other elements of critical infrastructure, notably improved water sources, underpin the overall functionality of the health system and are critical for the provision of clinical care as well as the maintenance of sanitary standards.
Institutional capacity
Properly functioning public health systems depend on more general institutional capacities such as effective systems for planning, management, resource allocation and expenditure, as well as policy formulation, coordination and implementation. These and other attributes of governance and bureaucratic structures are important determinants of whether health risks and capacity building priorities are identified and prioritised, whether appropriate plans and sufficient resources are put in place and whether inputs (including human and financial capital) are effectively translated to health system outputs. Institutional capacity is difficult and often slow to develop,14 but can be rapidly degraded by political instability and violence. Vital health systems, including surveillance, information systems, clinical referral and care, and supply chains, are vulnerable to disruption by political instability, insecurity and armed conflict.15 As such, countries with stable institutional structures and security conditions will perform better in detecting and responding to outbreaks than countries with political volatility, weak public administration systems and active conflict or insecurity.
Economic resources
National preparedness to detect emerging or epidemic-prone diseases requires adequate financial resources and investment in public health systems.16 The same is true of response: a large body of evidence points to a direct link between the adequacy of health financing and key metrics associated with effective response, including the quality of clinical care and health outcomes.17 During acute public health emergencies, health ministries as well as local government units may be required to rapidly scale up surveillance and health provision activities. This can lead to rapidly mounting costs, especially for personnel and consumables such as personal protective equipment and vaccines, which can be difficult to sustain without adequate resources.
Public health communication
Risk communication plays a key role in the management of public health emergencies. Government communication efforts are critical to informing citizens about what is happening during an outbreak, sharing information on the aetiological agent and providing actionable guidance on how the public can limit exposure and mitigate risk. These activities require effective systems to identify salient information gaps (or potentially hazardous rumours and misinformation), craft and adapt messaging and rapidly disseminate it to the population.18 The dissemination of information is only a first step; risk communications must also be accepted and adopted by the public. Several factors influence public acceptance of official communications, including the population’s level of trust in authorities,19 as well as overall level of public education. Educated and literate populations are more likely to be aware of basic public health practices and risks, and to understand and respond to expert guidance for behavioural changes to limit disease risks.20
Data sources and index construction
The index that we present consists of a weighted combination of five subindices capturing the concepts outlined above. Each subindex is constructed from multiple indicators, which are weighted to reflect their estimated importance (see online supplementary information). The weighted indicators are combined in order to produce a global relative ranking of countries, on a 0–100 unitless scale. Countries were omitted from the ranking if international statistics were incomplete or were identified as having substantial measurement error (see online supplementary table 2).
Data sources and standardisation
Indicator data for the EPI were derived from publicly accessible data sets produced by international organisations including the World Bank, the WHO, United Nations specialised agencies, non-governmental organisations and local administrative sources (see online supplementary table 1 for indicator list and sources). Data sets were assessed for construct validity to identify measures and proxies that appropriately capture the concept under consideration.21 Candidate data sets were then assessed according to measurement reliability, reporting recency, frequency of update and spatial coverage, and excluded if substantial gaps or bias were detected on review.
Data were rescaled and standardised using the formula Xscaled = (Xi − Xmin)/(Xmax − Xmin), which recomputes each indicator such that it ranges from 0 (worst performing country globally) to 1 (best performing country globally). Each rescaled indicator can be interpreted as a relative ranking of country capacity.
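As a rough illustration, the rescaling can be expressed in a few lines of code. The sketch below assumes indicator values are held in a pandas Series keyed by country, and that higher raw values already indicate better performance; the indicator name and values are hypothetical.

```python
import pandas as pd

def rescale_indicator(values: pd.Series) -> pd.Series:
    """Min-max rescale an indicator so the worst performing country scores 0
    and the best performing country scores 1."""
    return (values - values.min()) / (values.max() - values.min())

# Hypothetical indicator: hospital beds per 1000 people
indicator = pd.Series({"Country A": 1.2, "Country B": 4.5, "Country C": 2.8})
print(rescale_indicator(indicator))  # A -> 0.00, B -> 1.00, C -> ~0.48
```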
Index weighting
Each indicator and subindex was weighted to reflect its relative importance in the index. Weights were derived through a multiround, anonymous, expert Delphi process. Delphi methods are commonly used in public health assessments and strategic planning, particularly for policy questions for which there is incomplete knowledge, a lack of fully objective standards to guide decision-making or disagreement among policy experts.22 23 Eleven Delphi group participants were selected. The group was chosen to reflect the varied skills and disciplinary perspectives that bear on epidemic preparedness and response, including epidemiology, clinical medicine, outbreak response, health system capacity building and statistical analysis. This approach was taken to mitigate bias arising from disciplinary training, and to provide a broad and varied knowledge base.24 (Additional information on the composition of the Delphi panel is reported in the online supplementary table 3).
In the first round, each expert was asked to weight each indicator and subindex according to its relative importance, and to additionally comment on the validity, measurement reliability and importance of each indicator and subindex. In the second round, all scores and assessments from the first round were anonymised and shared within the group, and experts could revise their weights. The anonymity of the process was designed to prevent interpersonal dynamics, disciplinary collusion or other factors from biasing the results. The indicator and subindex weights were then estimated by taking an unweighted average across responses from all experts.
Index scoring
Country EPI scores were estimated in a three-step process. First, all indicators were normalised using the formula noted above and weighted to reflect their relative importance. Second, each subindex score was estimated by taking the weighted average of all constituent indicators. Third, overall country scores were estimated by taking a weighted average of subindex scores.
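The three-step scoring can be sketched as follows, assuming indicators have already been rescaled to the 0–1 range; the indicator groupings and weights shown are invented for illustration and are not the published Delphi weights.

```python
import pandas as pd

def weighted_average(df: pd.DataFrame, weights: dict) -> pd.Series:
    """Weighted average across columns, with weights renormalised to sum to 1."""
    w = pd.Series(weights).reindex(df.columns)
    return df.mul(w, axis=1).sum(axis=1) / w.sum()

# Step 1 (rescaling) is assumed done; values below are illustrative 0-1 indicators
public_health = pd.DataFrame(
    {"surveillance": [0.9, 0.3], "lab_capacity": [0.8, 0.2]},
    index=["Country A", "Country B"])
infrastructure = pd.DataFrame(
    {"road_density": [0.7, 0.4], "mobile_coverage": [0.95, 0.5]},
    index=["Country A", "Country B"])

# Step 2: subindex scores as weighted averages of constituent indicators
subindices = pd.DataFrame({
    "public_health": weighted_average(public_health,
                                      {"surveillance": 0.6, "lab_capacity": 0.4}),
    "infrastructure": weighted_average(infrastructure,
                                       {"road_density": 0.5, "mobile_coverage": 0.5}),
})

# Step 3: overall EPI score as a weighted average of subindices, on a 0-100 scale
epi_score = 100 * weighted_average(subindices,
                                   {"public_health": 0.7, "infrastructure": 0.3})
print(epi_score.round(1))
```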
To describe and empirically evaluate the EPI, all 188 countries were binned into five groups (‘EPI clusters’) by k-means clustering, based on their EPI scores. The k-means algorithm assigned each country to its nearest cluster centroid and selected the partition that minimised within-cluster variance.25 Descriptive statistics for EPI clusters were then generated.
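A minimal sketch of the clustering step, using scikit-learn's KMeans on a one-dimensional array of EPI scores, is shown below; the scores are invented, whereas the published analysis used all 188 countries. Note that raw k-means labels are arbitrary and would need to be reordered by centroid so that cluster 1 corresponds to the most prepared group.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative EPI scores (0-100 scale) for a handful of countries
scores = np.array([88.0, 85.5, 71.0, 55.2, 52.8, 40.1, 27.3, 25.0]).reshape(-1, 1)

# Partition countries into five clusters by minimising within-cluster variance
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scores)
print(kmeans.labels_)           # raw cluster assignment per country
print(kmeans.cluster_centers_)  # cluster centroids on the EPI scale
```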
Validating the Index against detection and response outcomes during historical outbreaks and epidemics
To assess if a country’s EPI score was correlated with evidence of epidemic preparedness and response, we evaluated the association of EPI cluster with empirical measures reflecting disease outbreak detection and response capacity for selected historical outbreaks and epidemics.
Metrics for preparedness are challenging to empirically validate, especially against epidemic impacts (eg, numbers of cases or deaths), as variation in surveillance can lead to systematic bias in observed outcomes. Epidemic severity is a function of a number of factors, including pathogen characteristics (eg, infectiousness, transmission mechanism), population size and density, and travel and social contact patterns. All else being equal, countries with effective surveillance systems may experience fewer cases because timely recognition of cases enables more effective outbreak mitigation and response. However, countries with weak health surveillance systems may also report fewer cases due to their limited capacity to identify cases and deaths, and therefore could (incorrectly) appear to have better outcomes than countries with more developed surveillance capacity.
To guard against these biases, we validated the EPI against measures of system outputs and epidemic impacts for high-profile and high-impact epidemics and pandemics, which are less likely to be affected by surveillance biases, owing to intensive and well-resourced efforts to estimate relevant epidemiologic measures. To measure outbreak detection and reporting, we assessed the timeliness of outbreak reporting for 854 events from WHO Disease Outbreak News (DON) reports over the period 1996–2016. Timeliness was estimated for each event by computing the time elapsed between the initial event date and the date of the corresponding WHO DON report. Reporting timeliness has been used as a proxy measure for surveillance and reporting capacity in prior analyses, and provides a useful summary metric of the capability of these systems.13 16 26
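The timeliness measure itself is a simple date difference; the sketch below assumes each DON event record carries an estimated outbreak start date and the date of the corresponding DON report (the dates shown are hypothetical).

```python
from datetime import date

def reporting_delay(event_start: date, don_report: date) -> int:
    """Days elapsed between the initial event date and the WHO DON report date."""
    return (don_report - event_start).days

# Hypothetical event: outbreak begins 1 March 2015, DON report published 25 March 2015
print(reporting_delay(date(2015, 3, 1), date(2015, 3, 25)))  # 24 days
```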
For outbreak response, we assessed the correlation between EPI cluster and country-level vaccination rates during the 2009 H1N1 influenza pandemic. The 2009 pandemic is a useful case because its global reach allows many countries to be evaluated, and because influenza vaccination is a critical component of pandemic response, requiring a country both to possess the resources to obtain vaccines and the capacity to distribute and administer them. In addition, countries are not unfairly penalised in this comparison: all countries were alerted to the pandemic at the same time, and vaccines became globally available at the same time.
Results
EPI rankings and comparison to other metrics
Mean country scores for the EPI clusters spanned 25.1–88.9 (table 1), a wide range demonstrating significant global disparities in epidemic preparedness. The largest within-cluster SD was found in the least prepared EPI cluster. EPI scores are also geographically clustered (figure 2), with the highest average scores identified in the world’s wealthiest regions: Western Europe, North America, and Australia and New Zealand. Conversely, countries with weak preparedness were found to be clustered in Western and Central Africa, Western Asia and parts of Southeast Asia.
We additionally compared countries’ EPI scores against two key existing metrics for infectious disease preparedness: the IHR and JEE core capacity scores (details on IHR and JEE score estimation are provided in the online supplementary information). These metrics are important points of reference, as they also measure national capacity to manage infectious disease outbreaks. However, as noted above, both metrics focus primarily on attributes of the health and emergency response systems, and do not capture other institutional, financial and infrastructural factors. While the IHR core capacity scores have been critiqued due to their reliance on country self-reporting, the JEE’s external evaluation component is designed to mitigate reporting bias.5 8 We find that the EPI correlates well with the JEE scores (0.85), whereas the correlation between the IHR and JEE core capacity metrics, although positive, is weaker (0.62) (see figure 3 and online supplementary table 4).
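As an illustration of how such a cross-metric comparison could be computed, the sketch below calculates pairwise correlations between country scores held in a pandas DataFrame; the values are invented, and since the article does not specify the correlation measure, Pearson is used here only as an example.

```python
import pandas as pd

# Hypothetical aligned country scores for the three metrics
scores = pd.DataFrame({
    "EPI": [88.9, 72.4, 55.1, 40.3, 25.1],
    "JEE": [4.3, 3.6, 2.9, 2.1, 1.8],
    "IHR": [85.0, 60.0, 70.0, 35.0, 40.0],
}, index=["Country A", "Country B", "Country C", "Country D", "Country E"])

# Pairwise correlations between preparedness metrics
print(scores.corr(method="pearson"))
```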
Empirical evaluation of the EPI
Detection and reporting
Countries with EPI scores indicating higher preparedness were found to report outbreaks more rapidly to international health authorities. Our analysis of WHO DON reports covering outbreak events during 1996–2016 found an association between a country’s EPI cluster and reporting timeliness (table 2). Figure 4 illustrates that more prepared EPI clusters report outbreaks faster than less prepared clusters. A multivariable Cox proportional hazards regression model, adjusting for year of report, shows that, on average, reporting timeliness decreases for less prepared EPI clusters relative to the most prepared cluster, from a 14% decrease (HR 0.86, 95% CI 0.67 to 1.1) for EPI cluster 2 to a 47% decrease (HR 0.53, 95% CI 0.42 to 0.67) for EPI cluster 5 (table 2).
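A minimal sketch of this type of survival model, using the lifelines library in Python, is shown below; the event data are invented, EPI cluster is collapsed to a single binary indicator for brevity (the published model used the five-level cluster variable), and variable names are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical event-level data: delay from outbreak start to WHO DON report
events = pd.DataFrame({
    "delay_days":       [10, 24, 35, 60, 90, 14, 45, 120, 30, 75],
    "reported":         [1] * 10,                                # all events eventually reported
    "years_since_1996": [5, 8, 11, 14, 16, 18, 19, 20, 12, 15],  # year-of-report adjustment
    "less_prepared":    [0, 0, 1, 1, 1, 0, 1, 1, 1, 0],          # 1 = lower-preparedness EPI cluster
})

# Cox proportional hazards model of time to report, adjusting for year of report
cph = CoxPHFitter()
cph.fit(events, duration_col="delay_days", event_col="reported")
cph.print_summary()  # a hazard ratio below 1 for less_prepared implies slower reporting
```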
Public health response
Data on influenza vaccine dissemination and uptake during the 2009 H1N1 pandemic were identified for 86 countries.27–29 Countries in the most prepared EPI cluster vaccinated, on average, nearly 20% of their population, while countries in the least prepared EPI cluster vaccinated approximately 5% (table 3). Additionally, a linear regression model predicting the per cent of population vaccinated, with EPI cluster as a categorical variable, showed a significant difference for each pairwise comparison with the best prepared EPI cluster. Relative to the best prepared cluster, which had the highest vaccination coverage during the 2009 H1N1 pandemic, the estimated reduction in the percentage of population vaccinated was 10.18 percentage points for EPI cluster 2 (p=0.007), 10.34 for EPI cluster 3 (p=0.008), 12.59 for EPI cluster 4 (p=0.0005) and 14.98 for EPI cluster 5 (p=0.001).
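A sketch of this regression using statsmodels' formula interface follows; the coverage figures and cluster assignments are invented, with EPI cluster 1 (most prepared) as the reference category.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-level data: 2009 H1N1 vaccination coverage by EPI cluster
df = pd.DataFrame({
    "pct_vaccinated": [22, 18, 25, 9, 12, 8, 6, 10, 7, 4, 5, 3],
    "epi_cluster":    [1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 5],
})

# Treat EPI cluster as categorical; cluster 1 (most prepared) is the reference level
model = smf.ols("pct_vaccinated ~ C(epi_cluster)", data=df).fit()
print(model.summary())  # coefficients estimate the drop in coverage relative to cluster 1
```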
Discussion
We developed a conceptual framework for comparatively evaluating national-level epidemic preparedness, and we operationalised that framework through the development of a global EPI. The EPI scores were binned for further analysis using a k-means clustering algorithm, and the results show significant variation in proxy measures of outbreak outcomes and response across EPI clusters.
Low-scoring EPI countries (ie, having lower preparedness levels) are geographically concentrated in West and Central Africa, Southwest Asia and areas within Southeast Asia. These geographies are also widely considered to be at heightened risk for disease emergence, particularly from zoonotic reservoirs. This suggests a potentially dangerous mismatch between infectious disease emergence and outbreak risk, and local capacity for its detection and mitigation.30 These countries likely face elevated morbidity and mortality risk arising from infectious disease outbreaks, and weak preparedness may also increase the risk of regional or global disease spread.
A comparison of preparedness metrics found good concordance between the EPI and JEE metrics, and weaker concordance between both metrics and the IHR. The EPI correlates well with the rigorous, yet slower moving and resource-intensive peer-reviewed assessments generated by the JEE. Because the EPI can be generated based on open-source data, it may shed light on preparedness in contexts where JEE estimates are sparse or not yet available. Additionally, the EPI can be updated quickly as conditions change in a country or region of interest, for example, during episodes of political instability or the onset of armed conflict that could adversely affect public health capacity. As such, it may serve as a leading indicator for the JEE during periods of instability and change, until the more resource-intensive JEE can be conducted and updated.
We are not aware of any prior efforts to assess metrics for epidemic preparedness against empirical outcomes. This gap is notable, as empirical validation is needed to assess the reliability and validity of any such framework. The EPI was tested against multiple historical outbreaks of differing aetiology, geographic location and scale. The empirical analysis assessed the association between EPI clusters and key observable implications of the quality of national preparedness, including the timeliness of outbreak detection and reporting, and the effectiveness of outbreak response. We found that higher scoring countries had significantly faster outbreak reporting, and higher levels of vaccine deployment during the 2009 H1N1 influenza pandemic.
The work described here is subject to limitations. Due to gaps in global data, we are unable to include a metric capturing whether countries have developed an outbreak response plan for epidemic or pandemic events, and whether this plan has been practised via simulations or drills, and updated. This is an important capacity which is measured through the JEE, but there are insufficient cross-national data to include in the model that we present here. Similarly, we are unable to include data on public trust in government, which is a critical factor influencing whether risk communication campaigns are accepted and adopted by the population, as well as whether the public accepts non-pharmaceutical interventions such as measures to increase social distancing. Unfortunately, data on institutional trust are fragmented, and up-to-date, globally comparable data are unavailable. As such measures become available with appropriate temporal and spatial coverage, they should be incorporated into measures of public health preparedness.
Additionally, while the empirical analysis demonstrates that the EPI is an effective metric for country-level preparedness for epidemics, it does not consider disease-specific factors which may impact response, detection or communication efforts. We have also limited the scope of the work here to national-level preparedness and have not considered the effects of community resilience or recovery. The EPI is intended to measure national preparedness for outbreaks and, by design, does not consider differences across countries in the intrinsic risk of infectious disease impacts (eg, likelihood of disease emergence).
Policy reviews conducted in the aftermath of recent epidemics and pandemics have consistently emphasised the importance of strengthening national-level preparedness for public health emergencies.7 31 32 The capacity of these systems is a critical determinant of whether outbreaks are quickly identified and contained before they grow and spread locally, regionally or globally. However, assessments of global infectious disease risk—and debates over resource prioritisation—have been limited by the absence of robust and reliable data on national preparedness.
The framework we present has several important advantages over existing models. First, and most significantly, it diverges from existing metrics for epidemic preparedness by considering a broader set of drivers of health capacity, including critical functions that sit outside the health system but are nevertheless essential to its effective functioning. Second, it provides a holistic, globally consistent approach that allows for comparisons between countries. Third, it moves beyond country self-assessment, thereby limiting associated reporting biases. Fourth, by relying on open-source global data sets, it allows for rapid and low-cost updating, to complement the slower moving and more resource-intensive JEE assessment process.
A global, comparative metric for pandemic preparedness could support the analysis of epidemic and pandemic risk in multiple ways, including the identification of high-priority countries and regions for capacity building, resource allocation and mobilisation; monitoring and evaluation of progress in capacity building efforts; and ensuring government accountability through more rigorous monitoring. The EPI can also be used to assess epidemic preparedness where other metrics such as the JEE have not been generated, or may be superseded by rapid institutional or societal change. The EPI can also be incorporated into infectious disease models and simulations to more realistically capture the effects of country-level capacity to detect and respond to disease outbreaks.
Accurate metrics for national epidemic and pandemic preparedness are important for ensuring accountability under the IHR, and uncovering and addressing gaps in global capacity to detect and manage infectious disease hazards. We present the EPI as a complement to existing metrics for assessing preparedness: a tool to fill gaps, and to quickly update estimates of preparedness during periods of instability and change.
Acknowledgments
We thank Jeremy Alberga, Sarah Barthel, Kimberly Dodd, Jean-Paul Gonzalez, Mary Guttieri, Damien Joly, Craig Kiebler, Robert Mann, Nalini Natarajan, Karen Saylors and Brad Schneider.
References
Footnotes
Handling editor Stephanie M Topp
Contributors BO, NB and PA conceived and designed the study. BO, MG, VS and NB acquired the data. BO, MG, NM, VS and PA drafted the article and BO, MG, NM, VS, NW and PA analysed the data and revised the article. All authors approved the final version.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement All data necessary to replicate results—including data and code required to reproduce empirical analyses presented in this article—will be made available to researchers upon request.