Original Article
Decisions about lumping vs. splitting of the scope of systematic reviews of complex interventions are not well justified: A case study in systematic reviews of health care professional reminders

https://doi.org/10.1016/j.jclinepi.2011.12.012

Abstract

Objectives

Lumping and splitting refer to the scope of a systematic review question, where lumped reviews are broad and split reviews are narrow. The objectives were to determine the frequency of lumping and splitting in systematic reviews of reminder interventions, assess how review authors justified their decisions about the scope of their reviews, and explore how review authors cited other systematic reviews in the field.

Study Design and Setting

A descriptive approach involving a content analysis and citation bibliometric study of an overview of 31 systematic reviews of reminder interventions.

Results

Twenty-four of 31 reminder reviews were split, most frequently across a single category (population, intervention, study design, or outcome). Review authors poorly justified their decisions about the scope of their reviews and tended not to cite other similar reviews.

Conclusion

This study demonstrates that for systematic reviews of reminder interventions, splitting is more common than lumping, with most reviews split by condition or targeted behavior. Review authors poorly justify the need for their review and do not cite relevant literature to put their reviews in the context of the available evidence. These factors may have contributed to a proliferation of systematic reviews of reminders and an overall disorganization of the literature.

Introduction

What is new?

Key finding:

  1. Systematic reviews of reminder interventions are frequently “split” with poor justification and do not adequately cite previous systematic reviews.

What this adds to what was known?
  1. Lumping and splitting have been discussed in the literature, but this is the first known investigation of lumping and splitting in a specific area.

What is the implication, what should change now?
  1. Systematic review authors and journal editors should be more aware of lumping and splitting. Authors should properly justify the conduct of their review and provide appropriate rationale for lumping or splitting their review question.

A key issue for systematic review authors when planning their review is deciding the review's scope, specifically how broad or narrow the question should be, as this will have a substantial impact on the conduct and generalizability of the review [1]. The methodological rationale for undertaking a review with a broad scope (lumping) is that systematic reviews aim to identify the common, generalizable features of similar interventions, and minor differences in study characteristics may not be important. In contrast, the methodological rationale for undertaking a review with a narrower scope (splitting) is that it is only appropriate to include studies that are highly similar in design, study population, intervention characteristics, and outcome recording [1]. Lumping allows the generalizability and consistency of research findings to be assessed across a wider range of settings and study populations; by increasing the number of studies considered, it may reduce chance results and allow better judgments about the consistency of observed effects across studies. Lumping also allows effects to be explored across different interventions, settings, and study populations [1], [2]. Split reviews, however, have fewer included studies that are likely more homogeneous, which leads to a more manageable review with a higher likelihood of meta-analysis. This allows for a numerical interpretation of the data and a more specific research question (Table 1).

Systematic reviews may be split for feasibility reasons, when review authors have limited resources, or because review authors are interested in a relatively narrow question. Although every systematic review is "split" to a certain degree, some decisions about the extent of splitting are better grounded than others. For example, systematic reviews of the effects of clinical treatments could be split by population (e.g., children vs. adults), intervention (e.g., pharmaceutical vs. surgical), comparison (e.g., usual care vs. placebo), and outcome (e.g., mortality vs. quality of life). Usually, the choice to split a review is based on the considerations that the effects of the intervention would likely vary across the chosen factors and that the expected variation in effect is likely to be clinically significant; thus, the results of a lumped review could be potentially misleading. Review authors are commonly able to justify their decision about the scope of their review based on an understanding of the mechanisms of action of the interventions (e.g., a single drug vs. a class of drugs); the underlying disease processes (etiological, epidemiological, or prognostic factors); methodological considerations/study designs (e.g., exclusion of nonrandomized studies); or the outcomes of interest (e.g., evaluating health outcomes vs. social outcomes). Although systematic reviews of complex interventions, such as professional behavior change interventions, can similarly be split (Table 2), there is commonly a weaker theoretical or empirical basis for choosing the factors on which to split.

Despite the importance of deciding the scope of a review, there has been relatively little methodological consideration of this issue or empirical investigation into how authors choose to lump or split and how they justify their decisions. In this study, we explore current lumping and splitting practices in the context of an overview of systematic reviews of reminder interventions to improve quality of care. The following research questions are addressed:

  1. How are systematic reviews of reminder interventions "lumped" or "split" according to population, intervention, study design, and outcome?

  2. How do review authors justify the framing of their research question?

  3. Are authors putting their reviews in the context of the evidence by citing previously conducted reviews in the same areas?


Inclusion criteria

Systematic reviews were included if they had explicit methods and selection criteria and had a primary focus to evaluate reminder interventions targeting health professionals. Reminder interventions were defined as “patient or encounter specific information, provided verbally, on paper or on a computer screen, designed or intended to prompt a health professional to recall information” [3].

Selection of systematic reviews

As part of an overview of systematic reviews of health care professional behavior change interventions (//www.rxforchange.ca

Description of systematic reviews

We identified 19,265 citations that included 183 systematic reviews evaluating health professional behavior change interventions and 31 evaluating the effectiveness of reminder interventions (Fig. 1). These 31 reviews were published between July 1987 and May 2008 in 19 different journals, ranging from having seven included studies to over 250 included studies in the review and with the contact author most likely being from the United States [4], [5], [6], [7], [8], [9], [10], [11], [12], [13],

Discussion

This descriptive analysis demonstrates that for systematic reviews of reminder interventions, splitting is more common than lumping and most split reviews are split by type of reminder or condition. Review authors poorly justify their decisions about the scope of their review and do not cite relevant literature to put their split reviews in the context of the available research. This leads to a poorly organized field of research.

The issue of lumping and splitting of systematic reviews has been

Acknowledgments

Jeremy Grimshaw holds a Canada Research Chair in Health Knowledge Transfer and Uptake. The Cochrane Effective Practice and Organisation of Care Group is funded by the Canadian Institutes for Health Research (CIHR). The Rx for Change database is funded by The Canadian Agency for Drugs and Technologies in Health, CIHR, and the National Prescribing Service. Alain Mayhew receives salary support from Cochrane Canada, CIHR, and the Ontario Ministry of Health and Long Term Care.

References (36)

  • E.A. Balas et al. The clinical value of computerized information services. A review of 98 randomized clinical trials. Arch Fam Med (1996)
  • E.A. Balas et al. Improving preventive care by prompting physicians. Arch Intern Med (2000)
  • J.W. Bennett et al. Computerised reminders and feedback in medication management: a systematic review of randomised controlled trials. Med J Aust (2003)
  • F. Buntinx et al. Influencing diagnostic and preventive performance in ambulatory care by feedback and reminders. A review. Fam Pract (1993)
  • B. Chaudhry et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med (2006)
  • I. Colombet et al. Decision aids for triage of patients with chest pain: a systematic review of field evaluation studies. Proc AMIA Symp (1999)
  • D.A. Fitzmaurice et al. Review of computerized decision support systems for oral anticoagulation management. Br J Haematol (1998)
  • A.X. Garg et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA (2005)