
Editorials

Lockdown-type measures look effective against covid-19

BMJ 2020; 370 doi: https://doi.org/10.1136/bmj.m2809 (Published 15 July 2020) Cite this as: BMJ 2020;370:m2809

Linked Research

Physical distancing interventions and incidence of coronavirus disease 2019


  • Thomas May, Floyd and Judy Rogers endowed professor
  • Elson S Floyd College of Medicine, Washington State University, WA, USA
  • thomas.may{at}wsu.edu

But evidence is undermined by unreliable data on incidence

The linked paper by Islam and colleagues (doi:10.1136/bmj.m2743) provides important preliminary evidence for the effectiveness of physical distancing (referred to by some as social distancing) measures in controlling the coronavirus disease 2019 (covid-19) pandemic, including closures of schools and workplaces, restrictions on mass gatherings and public events, and restrictions on movement (lockdowns).1 This supporting evidence is desperately needed as these measures are challenged around the world.

The greatest strength of this study is its reliance not on hypothetical modeling but on actual data. Although some modeling techniques remained necessary, for example, to establish “controls” specific to each country, the primary data reflected actual test results. Unfortunately, this reliance is also the study’s greatest weakness, making the analysis dependent on the quality of the testing data. Specifically, the authors relied on “daily reported cases” compiled from 149 independent countries, data subject to variable quality and accuracy and to inconsistent testing practices.

As a result, caution is warranted when interpreting the findings. These flaws are not the fault of the authors, who have done admirable work with the information available. But the collection and reporting of test data by regional and national authorities do not reflect the same commitment to scientific rigor, as evidenced by the authors’ long section on the study’s limitations. In particular, a lack of coordination and standardization in both testing and reporting has undermined the reliability of the authors’ conclusions, despite their high quality analyses.

For example, data on testing in the United States have been less than ideal. An internal investigation of testing kits by the US Department of Health and Human Services raised serious questions about accuracy and about variable sensitivity and specificity across the different versions of the test approved and distributed at different time points.2 Even data on total incidence from the Centers for Disease Control and Prevention—once the gold standard globally for infectious disease surveillance—have failed to properly distinguish antibody testing from testing for active disease,3 corrupting the value of the data for scientific purposes. Because of the failure to implement a coordinated, consistent testing strategy, changes in numbers of cases might simply reflect changes in testing practices rather than the effects of an intervention on the incidence of covid-19.
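A minimal numerical sketch illustrates why variable test accuracy matters. All figures below are invented for demonstration; they are not the HHS findings or actual US data. Two test versions with different sensitivity and specificity produce markedly different apparent case counts from the same population with the same true prevalence:

```python
# Hypothetical illustration: how changing test sensitivity and specificity
# shift apparent case counts even when true prevalence is unchanged.
# All numbers are invented for demonstration only.

def apparent_positives(n_tested, prevalence, sensitivity, specificity):
    """Expected positive results: true positives plus false positives."""
    infected = n_tested * prevalence
    uninfected = n_tested - infected
    return infected * sensitivity + uninfected * (1 - specificity)

n, prev = 10_000, 0.05  # same population, same true prevalence throughout

early_kit = apparent_positives(n, prev, sensitivity=0.70, specificity=0.95)
later_kit = apparent_positives(n, prev, sensitivity=0.95, specificity=0.99)

print(f"Early kit: {early_kit:.0f} apparent cases")  # 825
print(f"Later kit: {later_kit:.0f} apparent cases")  # 570
```

Here the apparent case count falls by roughly 30% purely because a different test version was deployed, with no change whatsoever in the underlying incidence.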

This is true not only for “total diagnosed cases” but also for incidence rates. For example, early shortages of testing kits in the US meant that testing was restricted to people with symptoms or those with known exposure to covid-19. Once testing expanded beyond these groups (which we had reason to believe would test positive), we would naturally expect the ratio of positive test results to tests administered to fall (table S2 in the linked paper). Because of poor coordination, supervision, and consistency of testing strategies across the US, it is impossible to know how to account accurately for variable testing practices in any analysis of covid-19 incidence. Little consistency has been shown in testing practices even within local testing sites, let alone between sites.
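The arithmetic behind this positivity-ratio point can be made concrete with a small sketch. The numbers are hypothetical, not actual US testing figures: once testing expands from high-risk groups to broad, low-prevalence community screening, the positivity ratio drops sharply even though true incidence has not changed at all.

```python
# Hypothetical illustration: expanding testing beyond symptomatic/exposed
# groups lowers the positive-test ratio with no change in true incidence.
# All numbers are invented for demonstration only.

def positivity(tests_by_group):
    """tests_by_group: list of (number_tested, probability_of_positive)."""
    positives = sum(n * p for n, p in tests_by_group)
    total = sum(n for n, _ in tests_by_group)
    return positives / total

# Early phase: kits scarce, only symptomatic/known-exposure people tested.
early = positivity([(1_000, 0.30)])

# Later phase: same high-risk group plus broad low-prevalence screening.
later = positivity([(1_000, 0.30), (9_000, 0.02)])

print(f"Restricted testing positivity: {early:.1%}")  # 30.0%
print(f"Expanded testing positivity:  {later:.1%}")   # 4.8%
```

Without records of who was tested and why at each point in time, a falling (or rising) positivity ratio cannot be attributed to an intervention rather than to the testing strategy itself.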

Important aspects of the design and conduct of testing strategies are often out of the control of public health agencies. In the US, local public health agencies are notoriously underfunded, understaffed, and dependent on the availability of tests and other resources.4 The role of these agencies is not to assemble high quality data for study but to respond to changing public health needs with whatever resources are available. It is the responsibility of politicians and policy makers to harmonize testing and reporting strategies for covid-19—at a national or, ideally, international level—so that data on incidence are meaningful, comparable, and useful for evaluating the effectiveness of pandemic responses. Whatever the reason for variation in testing practices, the result is that data accrued so far are inadequate for use in scientific evaluations of physical distancing measures.

The study by Islam and colleagues provides support for physical distancing but cannot be definitive for the reasons outlined. The fact that an effect is discernible across so many different data collection strategies and locations (individual countries) is, however, strongly suggestive that these measures work.

Their results might be viewed in the same way as preliminary data used in grant applications: the evidence so far is suggestive of a particular conclusion but not good enough to rely on, so a more rigorous study is needed. The study’s conclusions are probably correct, but we cannot know this for certain from case data collected so unsystematically by countries around the world.

Control strategies informed by flawed data might advance public health aims in the short term but do lasting damage to our ability to effect behavioral change through evidence in the future. We must be careful, then, not to mislead the public or overplay politically convenient findings, risking the trust necessary for an effective pandemic response.

This study is as good as it could be given the data available, but it would have been so much more valuable had “daily reported cases” been underpinned by meaningful data on testing. Calls for a coordinated, global public health infrastructure for a pandemic response have been growing for decades.5 Only by acknowledging our failures in systematic testing and data collection can we learn from our mistakes and avoid repeating them.

Footnotes

  • Research, doi: 10.1136/bmj.m2743
  • Competing interests: We have read and understood BMJ policy on declaration of interests and declare the following interests: none.

  • Provenance and peer review: Commissioned; not externally peer reviewed.


References