
Extending the measurement of quality beyond service delivery indicators
  1. Ambrose Agweyu1,2
  2. Theopista Masenge3,4
  3. Deogratias Munube5,6

  1. Department of Epidemiology and Demography, KEMRI—Wellcome Trust Research Institute, Nairobi, Kenya
  2. Kenya Paediatric Association, Nairobi, Kenya
  3. Elizabeth Glaser Pediatric AIDS Foundation Tanzania, Dar es Salaam, United Republic of Tanzania
  4. Paediatric Association of Tanzania, Dar es Salaam, United Republic of Tanzania
  5. Department of Paediatrics and Child Health, Makerere University/Mulago National Referral Hospital, Kampala, Uganda
  6. Uganda Paediatric Association, Kampala, Uganda

  Correspondence to Professor Ambrose Agweyu; AAgweyu@kemri-wellcome.org


Assessing the quality of healthcare services is a priority in low-resource and high-resource settings alike. It is, however, a complex endeavour. Outcome measures are subject to case-mix variation, often require lengthy follow-up periods to manifest, and are generally costly to monitor. Therefore, structure and process measures are routinely considered reliable alternatives under the assumption of a causal link between the provision of care and improved health status.1

In this edition of BMJ Global Health, Di Giorgio et al used such structure and process measures—that is, service delivery indicators (SDI)—to assess the quality of healthcare across 10 African countries.2 The SDI programme was set up to conduct cross-sectional, nationally representative surveys that examine service delivery performance in education and health in Africa. The health indicators assess health worker availability, health worker knowledge of the management of common ailments, and the availability of selected essential equipment and treatments. These surveys are aimed at providing high-level snapshots of the quality of health services in target countries.

In this editorial, we discuss some of the limits of using data from a platform such as the SDI programme to make sense of quality of care and highlight complementary approaches that are aligned with emergent thinking in the field.

Clinical vignettes

The SDI programme uses clinical vignettes of common conditions to assess knowledge among a randomly selected sample of health workers of various cadres ranging from doctors to community health workers. Clinical vignettes provide a convenient source of data when compared with alternative approaches such as direct clinical observation, standardised patients and audits of medical records.3 While knowledge is fundamental to the provision of care, highly standardised clinical scenarios do not adequately represent the complexity of real-world patient–provider encounters.

Furthermore, the organisation of care in a clinical department allows for consultation across providers, clustering of patients with more complex diagnoses (eg, tuberculosis) among experienced providers, and the potential for active learning and mentorship. Finally, the aggregate performance of health workers included that of doctors, community health workers, medical assistants, nurses and nurse–midwives. In many settings, prescribing is restricted to doctors and clinical officers; exceptions include disease-specific programmes such as the nurse-initiated management of antiretroviral therapy.4 It is therefore inappropriate to assess the treatment practices of cadres that are not expected to prescribe.

The authors suggest that the effect of these often important deviations from the real world is that the SDI underestimate the actual quality of care provided. However, the relationship between vignette performance and real-world practice is not consistent.3

Health worker availability

Health worker availability was examined through unannounced visits to assess absenteeism, reviewing the presence of randomly preselected staff against a duty schedule at a follow-up visit a few days after the initial enumeration visit. On average, the authors estimated provider absence at 30% across the surveys. Absenteeism is a major concern that exacerbates challenges in healthcare delivery by affecting the quality of services, increasing patient waiting times and discouraging care seeking.

However, to generate appropriate interventions to address it, quantification of absenteeism should be accompanied by a characterisation of the underlying reasons. The high levels of absenteeism may be a signal for weak accountability mechanisms. Still, they may also reflect mental or physical health issues, particularly in settings faced with numerical shortages of health workers.5 Informal arrangements among staff such as trading shifts are often used as solutions or mechanisms to cope with burn-out and accommodate competing personal responsibilities.

Ignoring these underlying considerations may lead to recommendations that further compound the problem, resulting in demotivation, reduced productivity and a deepening of the existing deficiencies.

Availability of equipment and supplies

Availability of equipment and medicines was assessed by visual inspection of storage facilities and consulting rooms at the health facilities. While the SDI programme is designed to be geographically representative, the availability of inputs can fluctuate considerably, even over short periods, and these fluctuations may not be random. Spot assessments of the availability of equipment and medicines may therefore not be representative across time. Reliance on visual inspection at a single point in time can thus yield inaccurate estimates where the delivery of supplies is inconsistent, leading to potentially flawed recommendations on how to address the challenge.

Using indicators to simplify complex information

Approaches for assessing quality of care continue to evolve to recognise and accommodate the multidimensional nature of clinical practice and differences across contexts. Despite the widespread use of composite indicators to simplify complex information, summary measures of quality should be interpreted cautiously, particularly where they have not been validated across a range of settings and against desired outcomes.6

The need for complementary approaches

The limitations of the SDI notwithstanding, the results of the study by Di Giorgio and colleagues contribute useful general insights on variations in quality of care across the African continent and provide a foundation for further inquiry into the critical determinants of quality of care. Research on public health systems will frequently require multidisciplinary expertise and the application of mixed methods to accurately capture contextual subtleties that are often concealed by a single method of inquiry applied in isolation. There is now broad appreciation, for example, that the solutions to improving quality of care lie beyond building knowledge through conventional training.7 Focus is shifting towards long-term continuous active engagement using embedded approaches with feedback loops to allow for continuous improvement and foster trust, ownership and shared purpose.8 9

Finally, researchers studying the performance of healthcare practitioners must strive to involve frontline providers and, where possible, the users of health services, from study design through data collection, analysis and reporting. The value of inclusive approaches is manifold: they not only strengthen the validity of the findings but also promote uptake and positive change.


Footnotes

  • Twitter @AmbroseAgweyu

  • Contributors AA prepared the first draft of the editorial. TM and DM contributed to subsequent versions. All authors reviewed and approved the final manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Commissioned; internally peer reviewed.

  • Data availability statement There are no data in this work.
