
Making the world a simpler place: the modeller’s temptation to seek alternative trial results
Tim Colbourn¹, Audrey Prost², Nadine Seward³

¹ UCL Institute for Global Health, London, UK
² Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, London, UK
³ Centre for Implementation Science, Department of Health Service and Population Research, King’s College London, London, UK

Correspondence to Dr Tim Colbourn; t.colbourn{at}ucl.ac.uk


Summary box

  • Modelled results should not be prioritised over empirical trial results when these are available.

  • Researchers need to consider theories of intervention effect, or lack of effect, and the roles of context when modelling results.

  • Increasing coverage of interventions may not lead to reductions in mortality.

Murray et al suggest, via sophisticated mathematical modelling, that a radio intervention saved thousands of lives in Burkina Faso because it increased care-seeking for childhood illnesses, and that it could save tens of thousands more if scaled up across sub-Saharan Africa.1 2 In this commentary, we examine Murray et al’s methods, remind readers that the Burkina Faso trial found no effect on child mortality and argue that privileging modelled over actual empirical data is both questionable and unnecessary.

We have five concerns about the methods used in Murray et al. First, the authors ignore empirically recorded mortality data from the trial’s control arm, and instead choose to use a modelled estimate of higher mortality relative to the intervention arm. Second, the models assume that all children seeking treatment for pneumonia received appropriate care and oral antibiotics. We know this is often not the case and is even less likely when a health system is strained by sudden increases in care-seeking.3 Third, there is no discussion of the significant increase in consultations for ‘other diagnoses’ in the trial’s control arm. This indicates that the intervention significantly decreased consultations for ‘other’ diagnoses and calls into question the modelled mortality estimate. Fourth, the ‘compression’ method—used to account for multiple diagnoses in the same child and allow only one primary diagnosis—seems to favour malaria as the leading cause of death, which is responsible for much of the modelled mortality reduction. Finally, although the 5.5% modelled mortality reduction estimate for the third year has a negative lower bound (95% CI −0.1% to 13.1%), the authors report that the intervention saved between 239 and 1554 lives that year.1 These assumptions and possible errors together are likely to have inflated the estimated number of lives saved.
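To make the final point concrete: an interval for ‘lives saved’ is simply the reduction interval scaled by the number of deaths it applies to, so a reduction CI that crosses zero must yield a lives-saved interval that does as well. The sketch below illustrates this with a hypothetical counterfactual death count; only the 5.5% point estimate and the −0.1% to 13.1% CI come from Murray et al’s reported third-year result.

```python
# Illustrative only: propagating a mortality-reduction CI to "lives saved".
# expected_deaths is hypothetical; only the 5.5% point estimate and the
# -0.1% to 13.1% CI are taken from Murray et al's third-year estimate.
expected_deaths = 11_000          # hypothetical deaths expected without the intervention

reduction_point = 0.055           # 5.5% modelled point estimate
reduction_ci = (-0.001, 0.131)    # 95% CI: -0.1% to 13.1%

lives_saved_point = reduction_point * expected_deaths
lives_saved_lo = reduction_ci[0] * expected_deaths
lives_saved_hi = reduction_ci[1] * expected_deaths

print(f"Point estimate: {lives_saved_point:.0f} lives saved")
print(f"95% CI: {lives_saved_lo:.0f} to {lives_saved_hi:.0f} lives saved")
# Output: point estimate 605; CI -11 to 1441. The lower bound is negative,
# so the data are also compatible with zero lives saved (or a small
# increase in deaths); a strictly positive range such as 239 to 1554
# cannot follow directly from this interval.
```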

Leaving these methodological concerns aside, Murray et al’s work poses an interesting conundrum. Should modelled effects of an intervention on child mortality take precedence over actual, empirical mortality data from a randomised controlled trial (RCT)? Should every team with an underpowered trial now use secondary outcomes to model the effects of its interventions on distal health endpoints? The RCT found that the radio intervention’s effect was compatible with the 7.1% reduction in under-5 mortality estimated through modelling, but also that its most likely effect on child mortality was… nothing (rate ratio: 1.00; 95% CI 0.82 to 1.22, p>0.999).2 Null RCT results do not normally warrant calls for continent-wide scale-up. Surprisingly though, 21 news media outlets, including Reuters, BBC and CNN, reported that thousands of lives had been saved through the radio intervention.4 5 Fortunately, the BBC Media Action Trust have since questioned the modellers’ strong claims.6 The use of modelling methods on underpowered trials showing no evidence of effect on primary outcomes is a slippery slope. How many other interventions might this be done for, and when? When trials have an effect size of 1.00 (as here)? 1.10 perhaps?
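For readers less familiar with rate ratios, a minimal sketch of why this result reads as ‘no effect’: ignoring the trial’s cluster design for simplicity, the death counts and person-years below are hypothetical, chosen only to reproduce a Wald CI of roughly 0.82 to 1.22 around a rate ratio of 1.00; they are not the trial’s data.

```python
import math

# Hypothetical counts chosen only so the Wald CI for the rate ratio
# reproduces roughly 0.82 to 1.22; these are not the trial's data, and
# the cluster design is ignored for simplicity.
deaths_intervention, deaths_control = 194, 194
person_years_int, person_years_ctl = 50_000.0, 50_000.0

rate_ratio = (deaths_intervention / person_years_int) / (deaths_control / person_years_ctl)

# Standard error of log(rate ratio) for two independent Poisson counts.
se_log_rr = math.sqrt(1 / deaths_intervention + 1 / deaths_control)
ci_lo = math.exp(math.log(rate_ratio) - 1.96 * se_log_rr)
ci_hi = math.exp(math.log(rate_ratio) + 1.96 * se_log_rr)

print(f"Rate ratio {rate_ratio:.2f}, 95% CI {ci_lo:.2f} to {ci_hi:.2f}")
# A CI of 0.82 to 1.22 contains 0.929 (the modelled 7.1% reduction), but
# its single most likely value is the point estimate of 1.00: no effect.
```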

There are many reasons why the trial may not have detected an effect on mortality, besides lack of power.7 Poor health system capacity and quality of care may mean that increased utilisation does not actually reduce deaths. The authors even note this possibility, but do not consider that increased utilisation itself could result in greater shortfalls as health facilities struggle to meet demand.8 Theoretically and practically, providing information and getting people to come to health facilities are only part of what is required. Reducing child mortality also requires strengthening health systems to meet demand in a timely, safe and equitable manner. To expect an effect on child mortality from radio messages alone is optimistic. To then create an effect via modelling when none was observed in a cluster RCT is puzzling. Going on—via the accompanying cost-effectiveness analysis9—to label the radio intervention ‘the second most cost-effective intervention to save children’s lives ever’ is simply bizarre.8 The available resources to tackle important problems like child mortality are too small to spend on interventions that are unlikely to work, at least on their own.7

In a recent insightful commentary in this journal, Pai et al highlight that we are often surprised when interventions that lead to improvements in surrogate endpoints (eg, care-seeking) do not save lives.10 For example, a recent RCT of the WHO Safe Childbirth Checklist in India found that birth attendants in facilities participating in the programme were more likely to adhere to safe practices, but found no overall reduction in maternal or perinatal mortality.11 We should not be surprised. Like Rutter et al,12 Pai and colleagues remind us that interventions are events that ‘slot’ into existing health and social systems and interact with them in complex ways.10 A safe birth checklist may help ensure critical tasks are done during childbirth, but it will not influence whether a pregnant woman reaches a facility on time, or whether a facility has drugs and equipment. The expectation that interventions targeting discrete steps in the continuum of care—like radio messages or a checklist—can alone lead to reductions in mortality is likely unrealistic; such reductions require many steps to occur together. Incorporating a theory-based process evaluation13 or realist evaluation14 examining how the radio intervention interacted with the health system and other contextual factors would have helped explain why the intervention alone was not able to reduce mortality. Such work would also enable us to understand the role mass media could play in more complete, real-world solutions to reduce child mortality.7

Pai et al encourage us to be both strategic and honest in our use of surrogate endpoints10: some interventions (like the radio messages) are specifically developed to influence them, and health systems factors will always shape the pathways to more distal health outcomes. Celebrating the increase in care-seeking, and acknowledging that in the context of the Burkina Faso trial no mortality reduction is more plausible than thousands of lives saved, would be a good step towards honesty.

References

  1.
  2.
  3.
  4.
  5.
  6.
  7.
  8.
  9.
  10.
  11.
  12.
  13.
  14.

Footnotes

  • Handling editor Seye Abimbola

  • Contributors TC wrote the first draft of the commentary which was improved by AP and NS. All authors reviewed and agreed with the final version.

  • Competing interests None declared.

  • Patient consent Not required.

  • Provenance and peer review Commissioned; internally peer reviewed.

  • Data sharing statement No additional data are available.
