To date more than 100 countries have carried out a Joint External Evaluation (JEE) as part of their Global Health Security programmes. The JEE is a detailed effort to assess a country’s capacity to prevent, detect and respond to population health threats in 19 programmatic areas. No attempt has yet been made to determine the validity of these measures. We compare scores and commentary from the JEE in three countries with the strengths and weaknesses identified in the response to a subsequent large-scale outbreak in each of those countries. Relevant indicators were compared qualitatively and scored as showing a low, medium or high level of agreement between the JEE and the outbreak review in each of the three countries. Three reviewers independently reviewed each of the three countries. A high level of correspondence existed between the scores and text of the JEE and the strengths and weaknesses identified in the review of an outbreak. In general, countries responded somewhat better than JEE scores indicated, but this appears to be due in part to JEE-related identification of weaknesses in those areas. The improved response was in large measure due to more rapid requests for international assistance in these areas. It thus appears that, even before systematic improvements are made in public health infrastructure, the JEE process may assist in improving outcomes in response to major outbreaks.
- Descriptive study
- Public health
- Indices of health and disease and standardisation of rates
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
After Action Reviews (AARs) are a key part of improving Global Health Security.
AARs in three countries closely tracked the strengths and weaknesses seen in each country's Joint External Evaluation.
AARs can be used to monitor progress and gaps in health security.
Structured evaluation of a country’s ability to respond to health security threats has garnered a great deal of attention and effort in the last 2 years with implementation of the Joint External Evaluation (JEE) system.1 At the time of this writing, 95 countries had engaged in the full JEE process involving a national self-study followed by a 5-day, on-the-ground review involving international experts.2 JEEs are intended to provide a thorough review and evaluation of a country’s capacities in 19 key areas of public health.3 The score for each of 49 indicators across the 19 domains is measured on a five-point scale combining quantitative and qualitative characteristics. The accompanying narratives summarise major strengths and limitations in each country’s public health systems, and recommendations for improvements are made.
The JEE scores are subsequently published as part of the JEE report and are publicly available on several websites. Much effort has gone into developing and carrying out the JEEs; little validation of the JEE scores and recommendations is available to date.
We review a disease outbreak in each of three countries having undergone the JEE process. We compare scores and recommendations from the JEEs to conditions identified during postoutbreak reviews affecting that country’s response to the outbreak. Such a comparison provides a field-based validation in outbreak-related areas of the scores and recommendations from the JEE.
Scores and recommendations from each country’s JEE were drawn from the JEE summary documents published on the WHO’s website.4 Information on each outbreak was collected using a combination of sources and methods, including the following:
Documented first and last reports from the US Centers for Disease Control and Prevention (CDC)’s Global Disease Detection Operations Center;
On-line media reports, UN Situation Reports and journal articles;
Interviews with staff following each outbreak at the CDC Operations Center;
Interviews with international and national responders during the outbreaks. These responders included staff from CDC, other international agencies, and national Ministries of Health. Questions specific to each outbreak were elaborated for these interviews on the basis of the above sources. In some cases, follow-up questions were posed to these informants in an iterative process to probe prior responses further and to triangulate information from the various sources.
Finally, preliminary conclusions were shared with field and headquarters staff to further refine an understanding of the collected responses.
Correspondence between the strengths and limitations in national systems relevant to an outbreak is summarised from review of text in the JEE document. Summarised information on each outbreak was compared with the relevant country’s JEE scores and text in these topical areas. The topical areas of relevance included IHR Coordination, National Laboratory Systems, Surveillance, Public Health Workforce, Preparedness, Emergency Operations, Medical Countermeasures and Risk Communication.
Each of the three authors independently made a subjective assessment of the similarity or difference between these two sets of information on a three-level scale. Each of the authors is involved in global health security work professionally and has taken part in JEEs and postoutbreak reviews, though not in the countries evaluated. The three reviewers did not consult one another in creating their agreement scores. High correspondence existed if both the JEE and the description of an outbreak raised a common concern. For example, if both described highly effective systems for laboratory diagnosis, a ‘high’ level of correspondence was recorded. If, instead, an inadequate response from the laboratory system was reported in the outbreak, ‘low’ correspondence was recorded. Similarly, if the JEE reported poor surveillance capacity, and surveillance during the outbreak was considered poor, a ‘high’ correspondence was recorded.
A kappa statistic was generated to assess how likely the observed level of agreement among raters would have been by chance. The MAGREE macro in SAS was used because it computes a multiple-rater kappa statistic and omits missing values.
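To illustrate the kind of statistic involved, the sketch below computes Fleiss’ kappa, a standard multi-rater agreement measure, from first principles. This is an illustrative stand-in, not a reimplementation of the SAS MAGREE macro used in the study; MAGREE additionally handles missing ratings, which this sketch does not.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for multiple raters.

    counts[i][j] = number of raters assigning item i to category j.
    Assumes every item is rated by the same number of raters.
    """
    n = len(counts)            # number of items (e.g. 37 variables)
    m = sum(counts[0])         # raters per item (e.g. 3)
    k = len(counts[0])         # categories (e.g. low/medium/high)
    # Observed agreement: per-item agreement, averaged over items.
    p_bar = sum(
        (sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts
    ) / n
    # Chance agreement: from the marginal category proportions.
    p_e = sum(
        (sum(row[j] for row in counts) / (n * m)) ** 2 for j in range(k)
    )
    return (p_bar - p_e) / (1 - p_e)

# Three items, three raters, perfect agreement spread over three categories.
print(fleiss_kappa([[3, 0, 0], [0, 3, 0], [0, 0, 3]]))  # -> 1.0
```

Kappa near 1 indicates agreement well above chance; values near 0 indicate agreement no better than chance, which is why a test of the kappa against chance (as MAGREE provides) accompanies the point estimate.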
The earliest JEE in this set was carried out in Ethiopia, during March/April 2016. The other two JEEs were carried out in June and July of 2017. The outbreak in Ethiopia was identified as beginning a little over a year after the JEE. In Madagascar and Nigeria, outbreaks began 2 and 6 months after the JEE, respectively. See table 1 and figure 1.
Acute watery diarrhoea in Ethiopia
Detection and treatment
The outbreak of acute watery diarrhoea (AWD) is dated as starting on 1/1/2017 because an index case was not identified. The Global Disease Detection programme of CDC stopped following the outbreak as it wound down after 23/7/2017.5
A total of 39 344 clinical cases, with 801 deaths, were attributed to AWD.
Cases were reported from seven regions of the country, with the majority of cases coming from the Somali region and believed to have begun from cases that came from Somalia.
Outbreaks in neighbouring countries occurred in 2014, 2015 and 2016. The outbreak in 2016 resulted in registration of more than 20 000 cases in Ethiopia, including in the capital city.
The 2016 and 2017 outbreaks were exacerbated by drought, a high number of displaced people in areas with inadequate sanitation, and poor food safety practices.6
Context and organisation
The outbreak was called by its causative agent—Vibrio cholerae—in South Sudan and Yemen. It was referred to as AWD in Sudan, Ethiopia and Somalia.
Leadership in the response in Ethiopia was provided by the Federal Ministry of Health, Regional Health Bureaus and the WHO.
Major response activities included drilling boreholes, trucking water and providing emergency food rations to millions of people.
Thousands of national staff were deployed for water treatment activities and to staff AWD treatment centres. They were supported by dozens of international staff.
Water quality testing and chlorination was a major activity, both among national and international partners.
Treatment centres were set up in all affected states. Much activity was focused on infection prevention and control in treatment centres, social mobilisation to identify cases and get them to treatment centres, and training by case management teams for treatment centres.
Lassa fever outbreak in Nigeria
Detection and treatment
The first cases of the current outbreak were identified during mid-December of 2017.7 As of 18/3/2018 a total of 376 confirmed cases and 86 deaths were recorded. A further 1495 suspected cases were identified, 1084 of which were determined to be negative. Of note, 3675 contacts of confirmed or suspected cases were followed. Among these contacts, 59 were symptomatic but only 23 were confirmed as positive cases; 805 were still being followed at the time of publication, and a total of seven had been lost to follow-up.8
The number of new cases identified peaked in late February. By mid-March, the number of new cases declined rapidly. Nine states left the active phase of the outbreak and 38 people were still receiving treatment in six of the remaining nine ‘active’ states.
Lassa fever is endemic to Nigeria and other West African countries. Small, disseminated outbreaks are common as animal reservoirs infect people in close contact. In 2017 there were two peaks of infection, indicating a potential for expanded transmission from animal reservoirs as came to occur in 2018. In 2016, 247 cases and 85 deaths were recorded. The death rate in 2018 appears to be about one-third lower, probably due to earlier treatment and better identification of cases.9
Context and organisation
Three states account for 83% of all confirmed cases. Cases were identified in 56 local areas of 19 states across the country.
The Nigerian CDC (NCDC) and WHO led response activities out of the NCDC Emergency Operations Centre in Abuja. Rapid Response Teams composed of NCDC staff, Ministry of Health staff and Field Epidemiology Training Programme residents led the response in affected states.10
Three laboratories in country confirmed infection using a PCR method. The laboratory system is supported by the Bernhard Nocht Institute for Tropical Medicine in Germany.
Three hospitals provided all the in-patient care for Lassa fever cases.
A total of 17 health workers became confirmed cases in six states. No new infections occurred among healthcare workers in later weeks.
Rapid Response Teams went to four states that bordered Benin to improve disease surveillance as nine suspected cases and several confirmed cases in Benin appear to have imported the infection from Nigeria.
Pneumonic plague in Madagascar
Detection and treatment
The index case became symptomatic in mid-August 2017 and travelled via taxi from the central highlands through the capital city on 27/8/2017. The first case was diagnosed on 11/9/2017, and WHO was notified on 13/9/2017. Twenty-seven other cases were traced to the index case.
Bubonic plague is endemic with cases reported every year. The last outbreak was 8/2016–1/2017 with around 300 cases. Pneumonic plague was reported in northern Madagascar last in 2015 with 14 cases. Seven of those were treated and four of them survived.
In 2017, a total of 402 confirmed cases and 209 deaths due to plague occurred through 27/11/2017. Some of these deaths occurred among unconfirmed cases, so they are probably not all due to pneumonic plague. A total of 2417 cases (including 700 with negative laboratory tests) were reported; 1293 of the total are considered confirmed, probable or suspected. Of the total, 1854 were classified as pneumonic; the others were bubonic or unclassified.11 The Government of Madagascar then declared the epidemic contained, while WHO said more cases could be anticipated through the end of the plague transmission season in April.
Context and organisation
The Ministry of Public Health led the response, co-led by WHO, focusing mainly on case finding, diagnosis, treatment of cases and isolation. Preventive chemoprophylaxis was provided to 7318 identified contacts of cases.12
Institut Pasteur de Madagascar provided all laboratory support for diagnosis and treatment. Awareness campaigns were led by the government throughout the country.
Nine plague treatment centres and six mobile centres were established with the support of international organisations.13
Fifty-five of 114 districts reported cases. The capital city had the most cases.
Comparison of JEE reports and outbreak results
Tables 2–4 present the results of the JEE and outbreak reviews in Ethiopia, Nigeria and Madagascar, respectively. In the final column of each of these tables, scores from the three raters on the level of correspondence between the JEE and the outbreak are presented as low (L), medium (M), high (H) or no response (N/A).
Thirty-seven variables were compared between JEE scores and field operation levels, by three raters, among these three outbreaks. This created a total of 111 scores representing a low, medium or high level of correspondence between the JEE and the outbreak response review.
For 13 of the 37 variables, all three raters agreed that the correspondence was high. For an additional 13 variables, two raters rated the correspondence as high, while one rater considered it to be only moderate. Only eight times did a rater consider the correspondence to be low, and for none of the 37 variables did more than one rater consider it low.
While 37 variables were evaluated, a reviewer occasionally chose not to respond with a ‘low, medium or high’ rating. In total, 107 scores were recorded among the three reviewers. The coefficient of concordance produced via the MAGREE routine for a kappa test was 0.457, with a chance probability of 0.037 by the associated F test. In simple terms, the level of agreement was high (a SAS summary of various measures of concordance is presented at https://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm%23statug_freq_a0000000647.htm; the SAS MAGREE routine is presented at http://support.sas.com/kb/25/006.html).14
The comparison did not show any consistent or dramatic conflicts between JEE and outbreak information. Thus, though interpretations may vary regarding the degree of agreement between JEE and outbreak information, JEEs overall appear to provide a very good guide to strengths and weaknesses in actual outbreaks.
The comparisons made here had two important limitations. First, response capacity at the end of the outbreak often had improved a great deal from the beginning. Thus, comparisons depend on when the comparison is made. Second, much of the action in the outbreak is local, so laboratory, social mobilisation or treatment characteristics in one area may be very different from another. JEE scores seldom took variable capacity in a country into account.
In summary, it appears that after-action reports can provide a strong check on information in the JEE and can provide a near real-time update on capacities of the public health system.
Low JEE scores existed in critical areas across all three countries, with implications for the ability of those countries to detect and respond to the outbreaks. Those low scores, however, were not always critical limiting factors. In the case of Ethiopia and Nigeria, even with limited skills and personnel, large country systems were able to mobilise an adequate number of skilled personnel.
Some of the inconsistencies found between the JEE and outbreak review can be explained by particular details of the outbreak. For example, though Nigeria has a low JEE score for border health, the national EOC mobilised teams to border areas to coordinate response. Because of the needs in that outbreak, one strong area of the JEE made up for weakness in another. In each of the three countries, international staff strengthened the response in areas rated low during the JEE. The quality of outbreak response, when inconsistent with JEE scores, was generally better than that predicted by the country’s JEE.
Where internal skill, equipment, training and personnel were lacking, in each of the three outbreaks, national resources were supplemented by international resources that to a large extent made up for national limitations. The quality and variety of those international personnel and supplies, and the ability of national systems to absorb them, was described to be important in strengthening the outbreak response; this could not be captured by the JEE. It appears to the authors that the JEE strongly sensitised national authorities to their areas of weakness and to opportunities to supplement national resources with international staff and equipment.
The JEE assessment identified systems capacity at the time of evaluation. What cannot be assessed from a single outbreak event is the influence of the JEE experience over time. It appears to us that for all three countries, the JEE created a framework to understand key roles and activities needed to respond to an outbreak more effectively.
Major outbreaks following a JEE may provide an opportunity for more rapid improvement in awareness of the systematic weaknesses to be addressed than existed prior to the JEE. A fuller evaluation of this would require comparing reviews of several outbreaks after a JEE with reviews of several outbreaks before it.
In Nigeria, Lassa outbreaks occur every year. In the outbreak prior to the JEE, an observer noted that the approach was to send epidemiologists out to collect data and to address problems in the response as they were identified. In the post-JEE Lassa outbreak, there was a much stronger focus on the technical activities needed in the areas of laboratory, surveillance, reporting, Emergency Operations Centres, Medical Countermeasures, Points of Entry, communication and biosafety. Specific weaknesses in human/animal surveillance and laboratory systems triggered discussion to involve the Ministries of Agriculture and Environment. The phrase ‘supply chain’ had become part of the vocabulary. There was discussion about weaknesses in newly established legislation. The JEE experience provided an intense context with which to focus on these issues, raising the level of understanding and discourse and creating a shared vision, which otherwise would likely have been far weaker.
Although the JEE raised the level of understanding of key roles and actions, that shared vision can be expected to deteriorate over time as staff rotation occurs and people not involved in the country’s JEE assume relevant posts. The need to refresh the reflection that occurred during the JEE can, in part, be met by improving After Action Reviews (AARs). AARs should consider the trend across several outbreaks over time. They should move from a focus on the specifics of the current outbreak to a more general reflection on the JEE indicators and levels.
What would the response to the outbreak have looked like if the JEE had not occurred? In Nigeria, weaknesses may not have been recognised as well or as quickly. Recognition of those weaknesses led, in the case of Nigeria, to prioritisation of critical functions; that prioritisation, and the coordination among state and national authorities, would have been far weaker had the JEE not sensitised a large group of people to critical functions. It is less clear that this occurred in Madagascar, where leadership authority was not clearly established, or in Ethiopia, where political considerations limited the ability of health leaders to organise and mobilise.
Several key qualities of these outbreak responses were not captured by each country’s JEE:
In federal system countries, such as Nigeria, the coordination of roles between national and state level authorities and the assessment of variable levels of capacity in various states.
Quality of AARs and their integration into International Health Regulations and interim JEE internal country assessments.
The level and timing of intersectoral participation in public health activities during an outbreak. Early large-scale mobilisation can greatly reduce transmission and obviate the need for later panic-level participation. This goes beyond coordination with security authorities or risk communication to affected communities. It most closely tracks to the PREVENT indicator of ‘IHR Coordination, Communication, and Advocacy’, but is a key part of response that is easy to see but difficult to measure.
Opportunities to focus on these issues in JEEs, and in After Action Reviews, Simulation Exercise Evaluations and annual national IHR reporting can be used to improve post-JEE National Action Plans and outbreak response in the future.
Thanks to Anu Rajasingham, Dan Duvall, Olivier de Polain, Biodun Ogunniyi, Lucy Boulanger, Kira Coggeshall, Rossanne Philen for collaboration in collecting information.
Handling editor Seye Abimbola
Contributors RG conceived of and organised the research. MB and LNM reviewed and scored each variable used independently and reviewed and contributed to the analysis.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Disclaimer The findings and conclusions in this presentation are those of the author(s) and do not necessarily represent the views of the Centers for Disease Control and Prevention or the World Health Organization.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement The data used to produce the results presented in this paper can be made available on request.