Morgan C Broccoli,1 Rachel Moresky,2 Julia Dixon,3 Ivy Muya,4 Cara Taubman,2,5 Lee A Wallis,6 Emilie J Calvello Hynes,3 on behalf of the AFEM Scientific Committee

    1. Department of Emergency Medicine, Boston Medical Center, Boston, Massachusetts, USA
    2. sidHARTe Program, Department of Population and Family Health, Mailman School of Public Health, Columbia University, New York, USA
    3. Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado, USA
    4. Nursing Committee Chair and Executive Committee Secretary, African Federation for Emergency Medicine, Cape Town, South Africa
    5. Department of Emergency Medicine, Harlem Hospital, New York, USA
    6. Division of Emergency Medicine, University of Cape Town, Cape Town, South Africa

    Correspondence to Dr Emilie J Calvello Hynes; emiliejbc@gmail.com
    • Received 15 July 2017
    • Revised 4 December 2017
    • Accepted 7 December 2017

    Abstract

    Facility-based emergency care delivery in low-income and middle-income countries is expanding rapidly, particularly in Africa. Unfortunately, these efforts rarely include measurement of the quality or the impact of the care provided, which is essential for the improvement of care provision. Our aim was to determine context-appropriate quality indicators that will allow uniform and objective data collection to enhance emergency care delivery throughout Africa. We undertook a multiphase expert consensus process to identify, rank and refine quality indicators. A comprehensive review of the literature identified existing indicators; those associated with a substantial burden of disease in Africa were categorised and presented to consensus conference delegates. Participants selected indicators based on inclusion criteria and priority clinical conditions. The indicators were then presented to a group of expert clinicians via an online survey; indicators meeting agreement were refined in person by a separate panel and ranked according to validity, feasibility and value. The consensus working group selected seven conditions, addressing nearly 75% of mortality in the African region, to prioritise during indicator development, and the final product of the multiphase process was a list of 76 indicators. This comprehensive process produced a robust set of quality indicators for emergency care that are appropriate for use in the African setting. The adoption of a standardised set of indicators will enhance the quality of care provided and allow for comparison of system strengthening efforts and resource distribution.

    • health systems
    • health systems evaluation
    • health services research

    Key questions

    What is already known about this topic?

    • The need for emergency care in low-income and middle-income countries (LMICs) has never been greater.

    • Quality assessment of emergency care delivery is essential for improvement of care provision.

    • Measurable indicators for the provision of emergency care in LMICs are lacking, and there is no formalised set of clinical quality indicators established and agreed on by providers and policy makers in these settings.

    What are the new findings?

    • This manuscript proposes a set of 76 quality indicators for emergency care provision in LMICs.

    • These indicators were selected and agreed on by expert clinicians practising emergency care in Africa using a multiphase consensus process.

    Recommendations for policy

    • These indicators will facilitate intranational and international comparison of emergency care delivery and development.

    • This comparison should lead to enhanced quality and safety of care provided.

    • Local adaptation specific to burden of disease and feasibility of measurement will be a crucial next step.

    Introduction

    In low-income and middle-income countries (LMICs), the need for quality emergency care has never been greater. It is estimated that 54% of worldwide morbidity and mortality can be attributed to emergency conditions.1–3 Emergency care systems are increasingly recognised as an essential delivery platform by which to prevent a substantial portion of death and disability.4 5 However, in many locations the delivery of coordinated, quality emergency care is still in its infancy.

    Measurable indicators for the provision of emergency care in LMICs are lacking; the quality of such care delivered at health facilities in these settings has yet to be addressed. In an effort to promote emergency care, the World Health Assembly passed resolution 60.22 in 2007 requesting that WHO ‘provide support to Member States for design of quality-improvement programmes and other methods needed for competent and timely provision of essential trauma and emergency care’. This resolution further urged Member States to ‘identify a core set of trauma and emergency-care services, and to develop methods for assuring and documenting that such services are provided appropriately to all who need them’.6 In support of these efforts, WHO recently established the Emergency, Trauma and Acute Care programme.7

    Facility-based emergency care delivery in LMICs is expanding rapidly, but such developments rarely include measurement of the quality or the impact of the care provided. Quality assessment of emergency care delivery is the essential foundation for improvement of care provision.8 9 There is no formalised minimum set of clinical quality indicators established and agreed on by providers and policy makers in LMICs.10 National health systems seeking to monitor performance of newly established emergency care service delivery are left without context-appropriate indicators. The lack of standardisation impedes evaluation of the impact of emergency care service delivery initiatives and comparison within and between regions.

    Well-established emergency care systems in high-income countries have quality improvement programmes that have had significant impact on standardisation of service delivery.11–14 Yet even in the most developed health systems, emergency care quality indicators have been difficult to link to patient outcomes.8 Where such indicators have been developed, they are usually not appropriate for low-income settings. The most frequently used indicators in high-income countries are dependent on time intervals that are difficult to capture, diagnostics (eg, time to ECG) and therapeutics (eg, tissue plasminogen activator for stroke) not readily available in LMICs, or robust systems that require multiple levels of coordination, resources and specialisation (eg, door to balloon time for acute myocardial infarction).9 15 In addition, the vast majority of quality indicators for emergency care are process indicators (activities and outputs), and are not proximally linked with improved clinical outcomes.10 12 16–19

    Performance metrics with substantial underlying assumptions cannot be included in isolation as a composite indicator of quality. For example, a commonly used high-income country metric is ‘door-to-doctor time’: the time from a patient’s arrival to being seen by a decision-making provider. A number of assumptions are embedded in this indicator: (1) a provider with the necessary training in emergency care is present; (2) the patient’s emergent clinical syndrome is recognised; (3) the patient is resuscitated according to an established standard of care based on best evidence; and (4) the provider arrives at a correct preliminary diagnosis and disposition. However, the reality is often drastically different: many facilities in LMICs do not have a physician present at all hours, those physicians rarely have specific training in emergency care, clinical protocols relevant to the setting are lacking, and the evidence base for interventions in such resource-constrained settings is limited. In these settings, prioritisation of such an indicator in isolation could give false reassurance of appropriate clinical quality while missing the substantial morbidity and mortality that occurs due to undertrained providers and a lack of physical resources.

    Some LMICs have attempted to define their own quality indicators. A recent systematic review identified 34 articles using indicators to measure the quality of emergency care in resource-limited settings.10 These publications generally describe indicators assessing care delivered for one specific disease process, such as asthma or trauma, rather than addressing the emergency care system as a whole. South Africa has identified performance indicators for its emergency units (EUs); however, the uptake and feasibility of these performance indicators have been extremely limited.20 In addition, South Africa is an upper-middle-income country with a developed emergency care system, and thus its selected indicators may not be applicable in a true low-income setting.21

    The methodology for the development of quality indicators is quite variable and not well defined.22 Consensus-based strategies, including the Delphi technique, have been used in the literature to produce emergency-specific indicators.23 24 The African Federation for Emergency Medicine (AFEM) hosts a series of annual consensus conferences to address current gaps impeding the development of emergency care; the 2016 conference aimed to develop a minimum set of context-appropriate clinical quality indicators for facility-based emergency care. Such indicators should be pragmatic, measurable and centred around current health priorities. This effort supports two primary ends: (1) to allow uniform measurement and objective data collection to enhance a facility’s emergency care delivery and facilitate comparison across a national health system or region, and (2) to serve as an input to a larger WHO process on the standardisation of quality indicators for emergency care in LMICs.

    We undertook a multiphase expert consensus process to identify, rank and refine quality indicators.

    Phase 1: literature review for quality indicators

    We searched peer-reviewed publications and the grey literature to determine commonly cited quality indicators for emergency care. We searched PubMed, MEDLINE, EMBASE and the Cochrane Library using the search term (quality(Title) AND emergency(Title/Abstract)). Fifty-three articles were selected, comprising publications that both addressed emergency care quality indicators and enumerated a list of indicators. We also reviewed the citation lists of included papers. We then further searched for indicators from different specialty domains relating to emergency care, those published by professional societies in the UK, USA, Canada and Australia, and those from the Columbia University sidHARTe Programme11–14 16 (Systems Improvement at District Hospitals and Regional Training of Emergency Care (sidHARTe) Indicators for Acute Care in Low and Middle-Income Countries Toolkit, unpublished). All indicators listed were extracted from the articles and compiled.

    The literature review returned 2307 indicators. This initial list contained many exact duplicates, as well as indicators that were worded slightly differently but synonymous. Duplicates were removed, each group of synonymous indicators was represented by a single indicator, and the final list of 137 indicators was categorised by the authors (MCB and EJCH) into Donabedian’s categories of structure, process and outcome.25 The categorised indicators were mapped to specific components of the patient encounter as defined by the WHO Emergency Care System Framework, in order to break the indicators into more manageable blocks for discussion and to provide context and organisation for consensus conference participants.26
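
    As a concrete illustration of this de-duplication and categorisation step, the sketch below shows one way such a pipeline could be implemented; the indicator texts, the synonym map and the keyword-based categorisation rule are all invented for illustration and are not the study’s actual data or method (the real categorisation was done by expert reviewers, not by rules).

```python
# Illustrative sketch only: toy indicator texts and synonym map, with a
# keyword rule standing in for the reviewers' Donabedian judgements.

RAW_INDICATORS = [
    "Time from arrival to triage",
    "time from arrival to triage",          # exact duplicate (case only)
    "Interval between arrival and triage",  # synonymous wording
    "Proportion of staff trained in basic emergency care",
]

# Map synonymous wordings onto a single canonical indicator.
SYNONYMS = {
    "interval between arrival and triage": "time from arrival to triage",
}

def canonical(text: str) -> str:
    key = " ".join(text.lower().split())
    return SYNONYMS.get(key, key)

def donabedian_category(indicator: str) -> str:
    # Toy stand-in for expert categorisation into structure/process/outcome.
    if "trained" in indicator or "available" in indicator:
        return "structure"
    if "time" in indicator or "given" in indicator:
        return "process"
    return "outcome"

unique = sorted({canonical(i) for i in RAW_INDICATORS})
for ind in unique:
    print(f"{donabedian_category(ind):9s} | {ind}")
```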

    Phase 2: selection of clinical conditions and initial indicators

    A diverse group of 32 physicians, clinical officers, nurses and administrators from 21 countries participated in the 2016 AFEM Consensus Conference. Participants were provided with a description of the working group goals and objectives, a briefing document describing the core tenets of clinical quality indicators and a template of the indicator matrix prior to the initial encounter day. Patient safety metrics in emergency care are being addressed in a separate process by WHO, and thus were not specifically addressed during this consensus session.

    Priority clinical conditions associated with emergent presentations were identified and informed by the Global Burden of Disease project.27 Explicit principles were established for the selection of clinical conditions: the condition must occur with significant frequency, be associated with high morbidity and mortality, fall within the scope of emergency care, and with timely intervention lead to improved clinical outcomes.

    Indicators associated with these conditions were then reviewed by the group according to the inclusion criteria described in table 1. Borderline indicators were marked and noted for clarification in further rounds.

    Table 1

    Criteria for emergency care clinical quality indicators in Africa

    Through consensus, the working group identified seven clinical conditions to prioritise in their indicator development (table 2). Overall, the identified conditions addressed nearly 75% of the mortality in Africa.28 Of note, we deliberately chose not to include prehospital services: they will be the subject of a separate AFEM indicator development process.

    Table 2

    Selected emergency clinical conditions and representative disease28

    Indicators compiled from the literature review were discussed and modified based on the targeted conditions and the pre-established inclusion criteria (table 1); no de novo indicators were generated in this phase. The group started with the list of 137 indicators, of which 101 (74%) reached agreement and were included in the next phase. The working group determined that timeliness indicators must be specific to the EU encounter only, and deemed that quantitative ‘time to’ indicators could not feasibly be collected with current charting methods in Africa.

    The group decided that while critical incident rates are important for quality improvement, they are difficult to define and measure within a minimum set of clinical quality indicators. Patient perception of care was also identified as an essential quality metric, but standardising patient satisfaction scores across the various cultures and countries of Africa was beyond the scope of this study’s recommendations.

    Phase 3: formal survey—expert clinician selection of indicators

    Conditions and quality indicators were sent to a convenience sample of 38 experts from 17 countries with extensive experience in the provision of emergency care across Africa. As in the prior round, participants were instructed to follow the criteria established in table 1 and to reflect the conditions selected in table 2. All participants completed the survey. Respondents rated each indicator on a 5-point Likert scale (ranging from ‘strongly disagree’ to ‘strongly agree’) and were also given the opportunity to suggest additional de novo indicators they felt were appropriate. Quality indicators for which more than 70% of respondents answered ‘agree’ or ‘strongly agree’ were selected to proceed to the next round.
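
    As an illustration of the selection rule in this phase, the sketch below computes the proportion of ‘agree’/‘strongly agree’ responses (scores of 4 or 5 on the Likert scale) and applies the >70% threshold; the indicator names and response data are invented.

```python
# Sketch of the >70% agreement rule: an indicator advances if more than
# 70% of respondents answered 'agree' (4) or 'strongly agree' (5).
# The survey responses below are invented for illustration.

responses = {
    "Documentation of vital signs at triage": [5, 4, 4, 5, 3, 4, 5, 4],
    "Door-to-doctor time under 30 minutes":   [2, 3, 4, 2, 3, 5, 2, 3],
}

THRESHOLD = 0.70

def agreement(scores: list[int]) -> float:
    # Fraction of respondents scoring the indicator 4 or 5.
    return sum(s >= 4 for s in scores) / len(scores)

selected = [name for name, scores in responses.items()
            if agreement(scores) > THRESHOLD]
print(selected)  # ['Documentation of vital signs at triage'] (7/8 = 87.5%)
```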

    Of the 101 indicators, 89 (88%) were selected for inclusion in the next phase.

    Phase 4: expert review and ranking

    A subsequent expert panel, sampled from senior delegates at the 2016 African Conference on Emergency Medicine, was convened to review and further refine the quality indicators selected in the formal survey. This diverse panel comprised 22 clinicians practising emergency care in Africa. The panel first reviewed the indicator list and removed those that did not meet group consensus. The panel then ranked the indicators in three domains: correspondence with the level of emergency system development, feasibility of data collection and value to patients (1 = lowest priority, 3 = highest priority). ‘Correspondence with level of emergency system development’ was defined as whether the indicator was valuable as a measure of the overall development of the emergency care system; for example, mortality in the first 24 hours after trauma might be a good indicator of how well the system is functioning, whereas documentation of disposition is particular to the facility and would likely be a less reliable indicator of system development. A composite score was generated for each indicator, and the indicators were ranked according to their importance as defined by the three domains.
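
    The composite ranking described above can be made concrete with a short sketch; the domain scores below are invented and do not reflect the panel’s actual ratings.

```python
# Sketch of the phase 4 ranking: each indicator is scored 1-3 in three
# domains (system development, feasibility of data collection, value to
# patients) and ranked by the composite score. All scores are invented.

panel_scores = {
    "Mortality within 24 hours of arrival (trauma)": (3, 2, 3),
    "Documentation of disposition":                  (1, 3, 2),
    "IV fluids given for SBP <90 mm Hg":             (3, 3, 3),
}

ranked = sorted(panel_scores.items(),
                key=lambda item: sum(item[1]), reverse=True)
for name, scores in ranked:
    print(f"composite={sum(scores)}  {name}")
```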

    Seventy-six of the 89 indicators achieved consensus. Due to time constraints, only the 55 indicators deemed most important for measuring facility processes were subsequently rated in the three domains, and a composite score was used to generate a prioritised master list (online supplementary appendix 2).

    Supplementary file 2

    Phase 5: indicator definition and refinement

    Each indicator was assigned a standard profile that included an operational definition, a numerator and denominator, and reference data sources for data capture. Standard definitions for each term are included in the metadata, which will be provided to users of the indicators. Where further detailed clarification was required, it was accomplished via expert review. The profiles for selected outcome clinical quality indicators are described in table 3; process, time-based and structure indicators are included as online supplementary tables A–D. A summary of the selected clinical conditions and their associated outcomes is available in table 4.
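
    A minimal sketch of what such an indicator profile might look like in practice, with the indicator value computed as numerator over denominator, is shown below; the field names and the example profile are illustrative, not the study’s published metadata.

```python
# Illustrative indicator profile: operational definition, numerator,
# denominator and data source, plus the value as a simple proportion.

from dataclasses import dataclass

@dataclass
class IndicatorProfile:
    name: str
    definition: str
    numerator: str
    denominator: str
    data_source: str

    def value(self, numerator_count: int, denominator_count: int) -> float:
        # The indicator value is the numerator count over the denominator count.
        if denominator_count == 0:
            raise ValueError("denominator is zero; indicator undefined")
        return numerator_count / denominator_count

profile = IndicatorProfile(
    name="Hypotension treated with IV fluids",
    definition="Adults with SBP <90 mm Hg who received IV fluids in the EU",
    numerator="Adult EU patients with SBP <90 mm Hg given IV fluids",
    denominator="All adult EU patients with documented SBP <90 mm Hg",
    data_source="EU register / patient charts",
)
print(f"{profile.name}: {profile.value(42, 60):.1%}")  # 70.0%
```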

    Table 3

    Outcome clinical quality indicators

    Table 4

    Summary table of clinical conditions and associated indicator list

    Discussion

    The initial stages identified clinical conditions that represent much of the burden of disease amenable to emergency care (table 2); this finding independently reflects other lists of life-threatening conditions requiring emergency care that have been produced through separate processes.29–31 The final list of clinical conditions with associated indicators represents the diversity of acute presentations that are amenable to emergency care. These indicators are also translatable across a range of clinical conditions (eg, the percentage of adult patients with systolic blood pressure (SBP) <90 mm Hg given intravenous fluids applies to trauma, sepsis and obstetric emergencies). This emphasises the cross-cutting efficiency of emergency care interventions in an EU (or, in the absence of a dedicated EU, an emergency receiving area) within the hospital.
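
    To illustrate this cross-cutting property, the sketch below computes the same hypotension indicator separately for several clinical conditions from a single set of encounter records; the records are invented.

```python
# One cross-cutting indicator (% of adult patients with SBP <90 mm Hg
# given IV fluids) computed per clinical condition. Records are invented.

from collections import defaultdict

encounters = [
    # (condition, sbp_mm_hg, iv_fluids_given)
    ("trauma",    82, True),
    ("trauma",    78, False),
    ("sepsis",    85, True),
    ("sepsis",    88, True),
    ("obstetric", 80, True),
]

given = defaultdict(int)         # hypotensive patients given IV fluids
hypotensive = defaultdict(int)   # all hypotensive patients

for condition, sbp, fluids in encounters:
    if sbp < 90:
        hypotensive[condition] += 1
        given[condition] += fluids  # True counts as 1

for condition in hypotensive:
    print(f"{condition}: {given[condition] / hypotensive[condition]:.0%}")
```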

    This process produced a robust set of clinical quality indicators appropriate to the African context (online supplementary appendix 1). These indicators should be readily transferable to most low-income countries with limited resources for emergency care, and may be used: for quality improvement projects within a single facility; to allow comparison and benchmarking across facilities within or between emergency care systems; and to evaluate a targeted intervention (eg, an educational programme). In addition, these indicators intersect with current priorities in emergency care development, such as minimum data sets, clinical and operational protocols, and a standardised minimum package of emergency care services.32 Some of the indicators inevitably have underlying assumptions that may not hold in all settings (such as those that depend in part on provider availability). We did not aim to produce a list of indicators usable only at the lowest resource level, and as such not all indicators may be applicable in all settings.

    Three notable points regarding the selection of indicators deserve discussion. First, time-bound indicators for interventions were defined as occurring during the EU encounter. This decision eliminated many indicators commonly used in high-income countries, such as ‘door to balloon’ for acute myocardial infarction.33 Recording time-stamped interventions is not feasible given the reality of clinical documentation in LMICs (eg, paper-based records, inconsistent charting, missing data).20 Thus, all time-based indicators report whether the intervention occurred during the EU length of stay, although more granular data would be preferable. Second, these indicators do not address the significant contribution that prehospital services may have on outcomes, and do not include indicators related to the interface of prehospital care with the facility (eg, the amount of time a facility diverts ambulances).34 These particular indicators are under consideration in a separate project by the AFEM Scientific Committee. Lastly, this work did not capture the patient’s perception of care, which is essential to the quality of emergency care delivered, as little multicultural data were available to inform such a process.

    This multiphase consensus process was not intended to be systematic: we aimed to produce a pragmatic and feasible list of indicators for the African setting. It is possible that potential indicators were missed in the initial processing after the literature review, but the multiple phases gave a diverse group of expert clinicians repeated opportunity to flag any significant deficiencies, and none were noted. While expert input addressed feasibility via the indicator ranking scheme, true feasibility will only be determined by formalised studies investigating what data are currently being collected in EUs at a variety of levels across the health system.

    Limitations

    There are a number of limitations that deserve mention. Published indicators for emergency care in LMICs, and especially in Africa, are widely lacking. This may be because emergency medicine as a specialty either does not exist or has only recently been developed in most of these countries. In addition, robust emergency care systems in many LMICs are in their infancy, so national indicators for facility-level quality may not yet have been agreed on.

    Another limitation relates to the search strategy used to identify indicators. While we attempted to be as inclusive as possible, a formal systematic review was not employed for the purposes of this consensus process; thus, we may have missed some possible indicators for inclusion. In addition, the authors grouped indicators by structure, process and outcome and mapped them to the WHO Emergency Care System Framework to facilitate discussion and provide a contextual reference point. These indicators were subsequently dissociated from the framework and the Donabedian categories in the final ranking. Grouping them in such a manner could have influenced the way that participants selected and subsequently ranked indicators.

    The clinical conditions prioritised in this process may not be representative of the most important clinical conditions affecting a specific region or area. The local burden of emergent conditions is still unknown in much of Africa;3 thus, Global Burden of Disease data were used as a proxy for the burden of emergency conditions to guide the process.25 Further work is necessary to define the essential emergency conditions requiring evaluation with quality metrics for a given region.

    Lastly, while the authors attempted to be as inclusive as possible of those in emergency clinical practice in Africa, the convenience sampling of participants at the 2016 AFEM Consensus Conference and of the subsequently engaged expert panel could have skewed the results so that they are not representative of the broader experience of emergency care on the African continent. Participants in this phase represented 21 countries but, as with any consensus process, not all views may have been included. Group discussion was elicited and facilitated, but this may have led some participants to conform with the majority view, and the results may not represent the full spectrum of opinions.35

    Conclusion

    Expanding interest in the development of emergency care by clinicians and policy makers in resource-limited settings underscores the need to measure and improve the quality of care delivered.28 We propose a minimum set of clinical quality indicators for facility-based emergency care in Africa, and provide a common language by which to facilitate intranational and international comparison; harmonisation on reported performance indicators will allow direct comparison of development efforts and should lead to enhanced quality and safety of care provided. Local adaptation specific to burden of disease and feasibility of measurement are crucial components of operationalising quality metrics. Feasibility studies are required to test the functionality of our proposed indicators; however, these indicators bring us one step closer to measuring quality emergency service delivery in low-resourced settings.

    Acknowledgments

    The authors thank the sidHARTe Program Indicators for Acute Care in Low and Middle-Income Countries Toolkit (IFACT) team for their inputs, and sidHARTe members Timothy M Tan and Stephanie J Hubbard for their contributions. The authors also thank Teri Reynolds for extensive contributions and Jennifer Pigoga for editorial assistance.

    Footnotes

    • Contributors MCB, LAW and EJCH conceived and designed the study. All authors contributed substantially to indicator inputs and the indicator selection process. MCB, LAW and EJCH drafted the manuscript. All the authors contributed to the article’s revision.

    • Competing interests None declared.

    • Provenance and peer review Not commissioned; externally peer reviewed.

    • Data sharing statement No additional data are available.

    • Collaborators Abuagla, Qais; Azaz, Akliilu; Becker, Joe; Bizanso, Mark; Brewer, Tom; Brysiewicz, Petra; Cameron, Peter; Castren, Maaret; Cattermole, Giles; Chang, Cindy; Corder, Robert; Cox, Megan; De Vries, Shaheem; DeVos, Elizabeth; Diango, Ken; Dunlop, Steve; Fraser Doh, Kiesha; Fruhan, Scott; Geduld, Heike; George, Upendo; Hangula, Rachel; Hankin-Wei, Abigail; Hardcastle, Timothy; Harrison, Hooi-Ling; Helmy, Sanna; Hollong, Bonaventure; Jaiganesh, Thiagarajan; Kalanzi, Joseph; Krym, Valerie; Lin, Janet; Loganathan, Deb; Mabula, Peter; Mbanjumucyo, Gabin; Mfinanga, Juma; Mould-Millman, Nee-Kofi; Mukuddem, Nurenesa; Muldoon, Lily; Muller, Mudenga Mutendi; Murray, Brittany; Norgang, Kathryn; Nwauwa, Nnamdi; Nyrienda, Mulinda; Ogunjumo, Daniel; O’Reilly, Gerard; Osama, Muhammed-Ali; Osei-Ampofo, Maxwell; Pakeerathan, Sivarasasingham; Phillips, Georgina; Rahman, Najeeb; Richards, David; Sawe, Hendry; Taubman, Cara; Teklu, Sisay; Tenner, Andi; Tyndall, J Adrian; Wachira, Benjamin; Walter, Darren; Walton, Lisa Moreno; Zaki, Hany. The following countries (number of participants) were represented throughout the process, some participated in more than one phase: Australia (3), Botswana (1), Cameroon (1), Canada (1), DR Congo (3), Egypt (2), Ethiopia (2), Finland (1), Ghana (2), Kenya (1), Malawi (1), Mozambique (1), Namibia (1), Nigeria (2), Rwanda (3), Somalia (1), Sierra Leone (1), South Africa (5), Sudan (2), Tanzania (4), UAE (2), Uganda (1), UK (4), USA (14).

    This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
