Abstract
The COVID-19 pandemic is the latest evidence of critical gaps in our collective ability to monitor country-level preparedness for health emergencies. The global frameworks that exist to strengthen core public health capacities lack coverage of several preparedness domains and do not provide mechanisms to interface with local intelligence. We designed and piloted a process, in collaboration with three National Public Health Institutes (NPHIs) in Ethiopia, Nigeria and Pakistan, to identify potential preparedness indicators that exist in a myriad of frameworks and tools held by varying local institutions. Following a desk-based systematic search and expert consultations, indicators were extracted from existing national and subnational health security-relevant frameworks and prioritised in a multi-stakeholder two-round Delphi process. Eighty-six indicators in Ethiopia, 87 in Nigeria and 51 in Pakistan were assessed to be valid, relevant and feasible. From these, 14–16 indicators were prioritised in each of the three countries for consideration in monitoring and evaluation tools. Priority indicators consistently included private sector metrics, subnational capacities, availability and capacity for electronic surveillance, measures of timeliness for routine reporting, data quality scores and data related to internally displaced persons and returnees. NPHIs play an increasingly central role in health security and must have access to the data needed to identify and respond rapidly to public health threats. Collecting and collating local sources of information may prove essential to addressing gaps; it is a necessary step towards improving preparedness and strengthening compliance with the International Health Regulations (IHR).
- public health
- review
- health policy
- health systems
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Summary box
Existing global frameworks to strengthen public health core capacities lack indicators to measure several preparedness domains and do not provide mechanisms to interface with local intelligence.
In collaboration with three National Public Health Institutes (NPHIs) in Ethiopia, Nigeria and Pakistan, we designed and piloted a rapid framework review and Delphi consultative process to identify, assess and prioritise non-traditional subnational indicators to improve preparedness monitoring.
The demonstrated methodology can strengthen the leadership role of NPHIs in health security without the added burden of developing new indicators or collecting new data.
Introduction
In December 2019, a cluster of patients was admitted to hospitals in Wuhan, China, with an initial diagnosis of pneumonia of unknown aetiology.1 The cluster was epidemiologically linked to a local seafood and wet animal wholesale market, suggestive of zoonotic spillover.2 One thought experiment asks: what would have happened if the vendors at the now-infamous Huanan Seafood Wholesale Market had had to send a weekly report to a market inspector containing information on the health of each vendor? And what if one indicator, ‘number of vendors with suspected illness’, had been collected by local health authorities on a routine basis? Had this happened, and had this ‘non-traditional’ indicator also been monitored by a central public health authority, the outcome of the 2019 SARS-CoV-2 outbreak could have been very different.
The architecture of many traditional public health monitoring systems was not designed to detect non-human, disease-specific signals; yet it is exactly these signals—which can be collected at a local level and then reported upward—that need to be assessed for their utility as part of national preparedness efforts. Without systems in place to bridge national and subnational capacities for better preparedness, localised health events will remain undetected until the signal becomes loud enough to be picked up by the existing public health infrastructure. Strengthening early detection is essential for the public health entities responsible for preventing, detecting and responding to infectious disease outbreaks; robust and timely data are the only way to benefit from the extra weeks or days that may be gained from earlier detection and that are so critical to controlling an infectious disease outbreak. There is a wealth of data routinely collected across a range of indicators, using a variety of monitoring and evaluation tools and programmes. However, these data are not readily accessible, and the mechanisms to understand, effectively analyse and use them in decision-making for national health security and preparedness are lacking.
Limitations of the Joint External Evaluation tool for assessing country preparedness
With wide participation from 113 WHO Member States, the Joint External Evaluation (JEE) has become a meaningful exercise to attune national interests and promote cross-sectoral coordination to strengthen International Health Regulations (IHR) capacities. The IHR monitoring framework includes the State Party annual reporting process and voluntary external evaluation using the JEE tool, after-action reviews and simulation exercises.3 A key tool within the WHO’s IHR monitoring framework, the JEE uniquely convenes national actors across sectors and is externally validated by peer country experts. While it elevates the visibility of health emergency preparedness, it is a large, resource-intensive, multi-sectoral exercise that is logistically difficult to perform annually—the recommended frequency is every 4–5 years.4 The challenge is that outbreaks do not stop.
The JEE indicators have been found to accurately measure essential public health functions, such as disease surveillance and laboratory capacity, as well as health threats stemming from communicable disease at the national level.5 However, the first edition of the JEE, which has been used most extensively, gave little consideration to subnational capacities or cross-border outbreaks and did not integrate animal health surveillance data (beyond known zoonotic diseases)—even though the preponderance of emerging infections have zoonotic origins.5 Moreover, the process convenes national leaders with little representation from subnational levels or the informal and private sectors.6
These limitations can lead to lapses in national health security knowledge and awareness, which result in a skewed understanding of global health preparedness writ large—this has already been noted in other ‘global’ preparedness tools such as the Global Health Security Index.7 The SARS-CoV-2 pandemic is the latest evidence that national preparedness and global health security must be underpinned not only by essential technical capacities but also by local multi-sectoral public health intelligence and behavioural health data, which must be accessed and analysed in order for governments to take early action to respond to acute threats and crises.8–12
Local data and National Public Health Institutes
The International Association of National Public Health Institutes (IANPHI) includes membership from National Public Health Institutes (NPHIs) in 99 countries.13 In many contexts, NPHIs were first established because of, and in response to, public health challenges typically related to infectious diseases, and they house the capacities for effective monitoring of national health security and preparedness, including surveillance, evaluation and analysis of health information, and epidemiological research.14 In recent times, the breadth of programmes and activities undertaken by NPHIs globally has expanded as they confront new threats and risks to public health, evolve their vision and mandate, and respond to leadership and political priorities.14 Thus, NPHIs are increasingly being positioned as the main agency to monitor, evaluate and report on various aspects of national and subnational preparedness, playing a critical role in global health security.15–17
The structures of NPHIs vary, with many existing within Ministries of Health; yet many have limited access to non-health emergency-related data. Even when a national integrated disease surveillance system exists, there are still potentially useful data that stay within disease-specific programmes and information systems. The siloed nature of data and the limitations on data sharing are often mirrored subnationally and amplified at the inter-sectoral level. For example, organisations that manage humanitarian crises can provide important information on internally displaced persons, as well as refugee movements.18 19
Very few NPHIs have access to these data, in part because no mechanism is in place to enable their effective use for decision-making.20 COVID-19 has shown us that this fragmentation is a problem for even the best-resourced NPHIs, as insufficient data have hampered many countries’ responses.21 22 If NPHIs are to have robust public health intelligence to detect and even predict disease outbreaks, they must be positioned to access, analyse and act on health security-relevant data from all relevant sources.23 24
In 2018, a pilot project was developed through the collaboration and input of several NPHIs and partners to strengthen national accountability for preparedness.
Its primary objective was to ascertain if national monitoring and evaluation of preparedness could be strengthened by the identification of priority indicators that are not part of the JEE. Additionally, the pilot sought to identify local indicators collected regularly by non-NPHI entities and test whether NPHIs could access these indicators. The aim was to improve national situational awareness of potential health-impacting events. The objectives of this paper are to detail the methodological approach used to identify and prioritise indicators for the aforementioned pilot project and to provide examples of frameworks and indicators not currently monitored or collected in traditionally used global health assessment tools.
Process for identifying local non-traditional indicators
This section summarises the steps taken to identify a set of locally specific, JEE-complementary priority indicators that can be monitored by NPHIs to increase situational awareness for preparedness. To understand the need for and utility of these indicators, three NPHIs collaborated to pilot these methods: the Ethiopian Public Health Institute, the Nigeria Centre for Disease Control and the Pakistan National Institute of Health. The specific activities to achieve these aims were:
Definition of JEE-gap areas.
Rapid review for global gap-relevant indicator-based frameworks.
Identification of national level indicator-based frameworks.
A two-round Delphi process to prioritise indicators for pilot country NPHI monitoring and evaluation plans.
Five a priori JEE-gap areas were applied as parameters to conduct all peer-reviewed and grey literature searches and to characterise all outputs (table 1). Gap areas were created and identified through iterative consultative meetings with Chatham House and the Geneva Institute, IANPHI and partners, and relevant teams from the WHO Health Emergencies Programme. Additionally, each gap area was considered in light of three cross-cutting themes: cross-border coordination, subnational preparedness and One Health. A maximum of two gap areas were selected per NPHI: the first gap area, ‘travel and trade’, was selected by the WHO stakeholders in Geneva, and the second gap area was selected by the piloting NPHI (ie, ‘knowledge and data sharing’ for Pakistan and Ethiopia and ‘health systems resilience’ for Nigeria). This ensured that the project aligned with national, regional and global preparedness priorities.
This pilot was executed through a two-stage process to identify and prioritise potential indicators. The Identification and Prioritisation stages were codeveloped between the Project Team and all three NPHIs to decide (i) the criteria to assess the indicators, (ii) the criteria to prioritise indicators, (iii) the stakeholders who should be involved in the process and (iv) what methods should be used to prioritise indicators. The research questions that guide the presented methods and the process to identify and prioritise indicators are further illustrated in figure 1.
Stage 1: identification of frameworks and indicators
Framework search: global-level frameworks (rapid review)
Global gap-relevant frameworks (ie, conceptual documents with indicators and/or targets intended to guide data collection to measure outputs and outcomes) were systematically identified using Medline and Google (see box 1 for the search terms used). Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were applied to the Medline results, and only the first 50 results from Google were reviewed. Abstracts were reviewed, and in some cases the first page and methods sections were screened for relevance. Inclusion and exclusion criteria were applied to the search results (box 2).
Medline and Google search terms
Primary terms: health security OR health systems.
Secondary terms: monitoring and evaluation OR indicators OR monitoring framework OR monitoring tool OR framework OR tool OR assessment.
Tertiary terms: political and financial commitment OR health systems resilience OR research and development OR knowledge and data sharing OR trade and travel restrictions.
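To make the search strategy concrete, the short Python sketch below enumerates boolean query strings by combining one term from each group in box 1 with AND. The term lists are transcribed from box 1; the combination logic and quoting are our assumptions for illustration, since the exact syntax submitted to Medline and Google would have been adapted to each engine.

```python
from itertools import product

# Term groups transcribed from box 1. Combining one term from each group
# with AND is an assumption about how the search strings were assembled;
# the exact syntax was adapted to each search engine.
PRIMARY = ["health security", "health systems"]
SECONDARY = ["monitoring and evaluation", "indicators", "monitoring framework",
             "monitoring tool", "framework", "tool", "assessment"]
TERTIARY = ["political and financial commitment", "health systems resilience",
            "research and development", "knowledge and data sharing",
            "trade and travel restrictions"]

def build_queries():
    """Yield one boolean query per (primary, secondary, tertiary) combination."""
    for p, s, t in product(PRIMARY, SECONDARY, TERTIARY):
        yield f'"{p}" AND "{s}" AND "{t}"'

# Preview a few of the 2 x 7 x 5 = 70 combinations.
for query in list(build_queries())[:3]:
    print(query)
```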
Inclusion and exclusion criteria for frameworks
Inclusion criteria
Published within the last 10 years.
Contains quantifiable measurable indicators.
Relevant to at least one of the five gap areas and at least one cross-cutting theme.
Able to be collected within country.
Must be responsive to an outbreak; that is, it must be possible to set a baseline against which an outbreak can be declared.
Exclusion criteria
Theoretical indicators were excluded; the data must have been shown to be collectable and/or the indicator must have been implemented before.
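As a minimal sketch of how the box 2 criteria could be applied during screening, the Python function below encodes each criterion as a boolean check on a hypothetical framework record. The field names and the reference screening year are illustrative assumptions, not part of the original screening instrument.

```python
from dataclasses import dataclass, field

@dataclass
class FrameworkRecord:
    """Hypothetical screening record; all field names are illustrative."""
    title: str
    year_published: int
    has_measurable_indicators: bool
    gap_areas: set = field(default_factory=set)           # of the five a priori gap areas
    crosscutting_themes: set = field(default_factory=set) # cross-border, subnational, One Health
    collectable_in_country: bool = False
    outbreak_responsive: bool = False  # a baseline can be set and an outbreak declared
    theoretical_only: bool = True      # never shown collectable or implemented

def passes_screening(fw: FrameworkRecord, screening_year: int = 2019) -> bool:
    """Apply the box 2 inclusion criteria and the single exclusion criterion."""
    return (
        screening_year - fw.year_published <= 10  # published within the last 10 years
        and fw.has_measurable_indicators
        and len(fw.gap_areas) >= 1                # relevant to >= 1 gap area...
        and len(fw.crosscutting_themes) >= 1      # ...and >= 1 cross-cutting theme
        and fw.collectable_in_country
        and fw.outbreak_responsive
        and not fw.theoretical_only               # exclusion criterion
    )
```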
Searches were supplemented purposively by frameworks known to be used and implemented by key institutions involved in global health security and preparedness (eg, WHO, World Bank). Experts from institutions with interests in preparedness for global health security were also consulted.
Framework search: national level frameworks
National level frameworks were identified by first compiling a list of ‘common’ frameworks regularly and routinely used by countries to monitor the capacity of the health system to prepare for, prevent, detect and respond to public health threats. Furthermore, resource websites, such as the MEASURE Evaluation Health Information Systems Strengthening Resource Centre, were reviewed.25 These lists were then adapted and supplemented with gap-relevant framework recommendations provided by experts and key stakeholders in each pilot country, including health and non-health actors, government and private entities, and academic institutions.
All frameworks were analysed using a standard template to extract information about authorship/ownership, year of publication, source of framework, inclusion of (measurable) indicators and relevance to at least one of the five gap areas and one of the three cross-cutting themes.
Indicator extraction
Two members of the Project Team reviewed the national frameworks and datasets and extracted indicators based on an assessment of their relevance to preparedness, global health security, the gap areas of interest to the NPHI and the WHO-selected ‘trade and travel restrictions’ gap area. For this project, relevance is defined as the appropriateness of an indicator for informing (sub)national monitoring of preparedness information.26 27 An indicator is relevant if it (i) reflects the ability to monitor local/subnational data sources for national (domestic) monitoring and (ii) is highly applicable to public health security and the selected preparedness gap area. Additionally, to guide alignment to national realities within the gap area, indicators were categorised into preparedness domains and subdomains (table 2).
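The two-part relevance test can be sketched as follows; the record structure and field names are hypothetical, and the domain and subdomain labels applied in practice are those listed in table 2.

```python
from dataclasses import dataclass

@dataclass
class CandidateIndicator:
    """Hypothetical extraction record; field names are illustrative."""
    text: str
    source_framework: str
    monitors_local_sources: bool   # relevance test (i): local/subnational data sources
    applicable_to_gap_area: bool   # relevance test (ii): health security + selected gap area
    domain: str = ""               # preparedness domain (table 2)
    subdomain: str = ""

def is_relevant(indicator: CandidateIndicator) -> bool:
    """Both relevance conditions must hold for an indicator to be extracted."""
    return indicator.monitors_local_sources and indicator.applicable_to_gap_area
```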
Stage 2: prioritisation of indicators
Between January and March 2019, a two-round modified Delphi process, similar to Boulkedid et al,28 was implemented to capture expert input for the selection and prioritisation of indicators for each gap area.
Delphi round 1: indicator assessment (to narrow selection)
To assess the validity and feasibility of the indicators, a peer-review panel was formed of representatives from the three pilot NPHIs, the Project Team and three non-pilot NPHIs: the Robert Koch Institute (Germany), the National Institute for Public Health and the Environment (Netherlands) and the Norwegian Institute of Public Health. For this project, validity is defined as the degree to which the data measure what the indicator claims29: an indicator is valid if there is (i) adequate evidence and professional consensus to support it and (ii) an identifiable benefit to providing information on events that have a potentially meaningful impact on human or animal health. Feasibility is defined as whether the information needed to assess the indicator is likely to be available at the data source provided. In total there were 12 panel members, all of whom were experts in preparedness and surveillance within their respective NPHIs. The indicators were grouped by subdomain and country and divided among panel members. Indicators were scored on validity and feasibility using a 9-point Likert scale, where a score of ‘1’ is ‘definitely not valid’ or ‘definitely not feasible’ and a score of ‘9’ is ‘definitely valid’ or ‘definitely feasible’. Indicators that scored in the top two tertiles (ie, 4–6 and 7–9) passed the first round of the Delphi process and were included in the second round. An example of the spreadsheet used during Delphi round 1 is provided in online supplemental appendix 1.
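The round 1 retention rule can be expressed as a short sketch. The tertile cut-offs follow the description above (scores of 4–9 pass on the 9-point scale); aggregating panellists’ scores by the median is our assumption for illustration, as the aggregation rule is not specified here.

```python
import statistics

def passes_round_one(validity_scores, feasibility_scores, cutoff=4):
    """Retain an indicator if its aggregated score falls in the top two
    tertiles (4-6 or 7-9) on BOTH validity and feasibility. Using the
    median across panellists is an assumption for illustration."""
    return (statistics.median(validity_scores) >= cutoff
            and statistics.median(feasibility_scores) >= cutoff)

# Example: scores from three panellists for two indicators.
print(passes_round_one([7, 8, 6], [5, 4, 6]))  # True: medians 7 and 5 both pass
print(passes_round_one([3, 2, 4], [7, 8, 9]))  # False: validity median 3 is bottom tertile
```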
Delphi round 2: indicator prioritisation
In collaboration with each NPHI, an in-country workshop was held to prioritise the selected indicators within each of the gap areas. The aim of the workshop was to prioritise gap-area indicators that could be used to inform NPHI monitoring and evaluation objectives and strategies, to strengthen preparedness monitoring and health system readiness. The workshop convened a variety of stakeholders from across the public and private sectors, in health and non-health fields, and experts in sectors related to the country’s selected gap areas; online supplemental appendix 2 details which expert types were requested and represented. The second-round Delphi panel comprised 21 experts in Ethiopia, 25 in Nigeria and 35 in Pakistan. Contextual insights from the country prioritisation workshops are provided in box 3.
Contextual insights from country indicator prioritisation workshops
Every country has its own specific context and cross-sectoral relationships, which affect which indicators its national public health institute will monitor for preparedness. The three pilot countries ensured that their local contexts and political realities influenced how they selected and prioritised indicators for this project. Below are contextual insights from the Delphi round 2 country prioritisation workshops:
Ethiopia
Ethiopia selected the gap area ‘knowledge and data sharing’ due to its focus on public health emergency management and its newly established data management centre. Beyond coordinating and synthesising health information at a national level, the centre ultimately aims to generate evidence to inform national health policies and programmes. Participants shared that climate change indicators were a national priority due to their impact on livelihoods and migration, and selected indicators that could provide woreda (district) level information from the National Meteorological Agency that could then potentially be shared with public health information systems.
Nigeria
Nigeria selected the gap area ‘health systems resilience’ due to the volume of disease control and response efforts undertaken annually by the Nigeria CDC. As Nigeria is a federated country, these efforts should be led by both national and state public health entities, but due to low investment in subnational health systems, this was often not the case. Participants discussed whether it was possible to track both health expenditure and health budget allocation indicators. Ultimately, it was decided that it was more feasible to track budgets and workforce capacity in order to monitor health system investments at state and national levels.
Pakistan
Pakistan selected the gap area ‘knowledge and data sharing’ due to its focus on new legislation in the process of approval (the National Public Health Act), which formalises data sharing responsibilities. At the time of the project, a newly integrated disease surveillance programme had been established, but the data management system was still nascent and many existing systems had yet to integrate or exchange information. Thus, the participants focused on indicators across sectors that provided information on the completeness and reach (eg, into rural areas) of reporting systems, as well as indicators that measured progress in digital health information systems.
Participants in the workshop were arranged in multi-sectoral groups and were tasked with assessing all indicators that had passed the first stage of the Delphi process by applying criteria predefined in collaboration with pilot country leads (online supplemental appendix 3). The participant groups were further asked to identify which indicators should be labelled as ‘core’ indicators—these would be included in the monitoring and evaluation framework for their NPHI and tested during a subsequent part of the project. The scoring template used for Delphi round 2 is provided in online supplemental appendix 4.
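As a rough sketch of this selection step, the function below ranks indicators by an aggregate criteria score and flags the top-ranked ones as ‘core’. The scoring inputs and the ranking rule are assumptions for illustration; the actual criteria appear in online supplemental appendix 3, and the cap of 16 reflects the pilot design described later in this paper.

```python
def select_core_indicators(scored_indicators, max_core=16):
    """Rank (name, aggregate_score) pairs and flag the top-ranked as 'core'.
    Collapsing the group criteria scores into one number is an assumption."""
    ranked = sorted(scored_indicators, key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in ranked[:max_core]]

# Hypothetical example inputs; indicator names are illustrative only.
core = select_core_indicators([
    ("timeliness of district-level weekly reporting", 8.7),
    ("private laboratory results shared with the NPHI", 8.2),
    ("internally displaced persons registered by district", 7.9),
])
print(core)
```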
Non-traditional preparedness data: where are they and what can they tell us?
Global search findings
Before screening, over 200 frameworks were identified; after screening, 37 frameworks were retained for analysis. The selected frameworks represent many diverse sources where data may be held, including government agencies, vertical disease programmes, academic institutions, non-governmental organisations, international donors and multilateral organisations, and private health or non-health industries (table 3).
Similar numbers of frameworks were identified across the global (n=10), regional (n=8) and national (n=19) levels. Global-level frameworks were developed by a ‘global’ institution, such as a multilateral organisation, and had been recommended for all countries, for example, the Commonwealth’s Health Protection Policy Toolkit: Health as an Essential Component of Global Security.30 Regional frameworks provided indicators to be used in countries from the same geographical block, for example, the Integrating Financing for Health Security in East Asia Pacific Region Concept Note.30 Several frameworks (n=14) had indicators that measured multiple gap areas.
Across the five gap areas, ‘health systems resilience’ had the highest number of frameworks (18/37), followed by ‘political and financial commitment’ with 13 frameworks (figure 2). ‘Research and development’ and ‘knowledge and data sharing’ each had nine frameworks. ‘Travel and trade restrictions’ had the fewest frameworks (n=4), with frameworks identified only at the regional and national levels. For the cross-cutting themes, 21 frameworks included indicators for ‘subnational preparedness’ data, followed by ‘cross-border coordination’ (n=10) and ‘One Health’ (n=9). While the distribution was similar across gap areas at both the global and national levels, at the regional level only one ‘One Health’ framework was identified.
While many of the frameworks feed data back into a health system, most do not integrate with existing public health information systems—for example, the OECD Health Care Quality Framework on Health System Performance,31 which provides indicators useful for assessing healthcare capacity, such as hospital admissions for chronic obstructive pulmonary disease and healthcare human resource numbers.
Several one-time frameworks were also identified; these were often generated from publications, projects or initiatives by entities external to government, such as professional networks or universities. The Laboratory Scorecard,32 used to score national laboratory network functionality in resource-constrained countries, is an example. This framework included targets encompassing regulatory frameworks, biosafety and biosecurity, and supply chain management, among others.
Many potential indicators came from reporting tools and surveys for either broad health information or disease-specific initiatives, for example, the Global Fund Concept Note, which is generated every 3 years for eligible countries33 and requires information on the epidemiology of HIV, tuberculosis and/or malaria, geographical health burden and health system indicators. These data are often compiled by the respective disease programmes and housed within their data systems. Undoubtedly, readily available data on vulnerable populations would provide important knowledge for national strategic disease outbreak preparedness and response.
Frameworks and indicators at the national level
A total of 37 discrete frameworks were identified in Ethiopia, 28 in Nigeria and 40 in Pakistan. For each country, these figures include the 19 common frameworks that were also found during the global-level search (table 4), but they do not include separate reports from the implementation of any framework in different states or provinces. All countries also identified some national frameworks that included indicators for gap areas other than the priority ones.
In terms of framework distribution across the country-selected gap areas: in Ethiopia, 32% (n=12) and in Pakistan 40% (n=16) of identified frameworks had indicators to collect ‘knowledge and data sharing’ data. In Nigeria, 45% of frameworks had indicators for ‘health systems resilience’ data. The number of frameworks covering the WHO-selected gap area ‘trade and travel restrictions’ was relatively low in Pakistan (n=4, 10%) and Ethiopia (n=8, 22%). In Nigeria, however, 13 frameworks (45%) with indicators relevant to ‘travel and trade restrictions’ were identified.
From these frameworks, a total of 120 indicators were identified for Ethiopia, 176 for Nigeria and 62 for Pakistan. These indicators were reviewed for relevance and duplicates by a second reviewer, leaving 86 indicators in Ethiopia, 87 in Nigeria and 51 in Pakistan to be included in the first Delphi round. After the first round, 76% (n=65) of indicators were retained in Ethiopia and 68% (n=59) in Nigeria for the second round; in Pakistan, all indicators remained after the first round. These indicators went on to the second round of the Delphi process, the in-country prioritisation workshops (figure 3).
Each country prioritised between 14 and 16 indicators (table 5). (As part of the pilot exercise, each country was instructed to choose no more than 16 indicators so that the other components of the project, ie, trying to access the indicators, could be completed within project timelines.) These indicators were seen as core to effective monitoring for preparedness. Prioritisation by local experts revealed that priority indicators are country and context specific; thus, this process was most useful for selecting indicators to assess national preparedness strength, not for comparing countries.
The variety of indicators revealed several interesting local sources of data, including the National Meteorological Agency in Ethiopia and the Media Regulatory Authority in Pakistan. At least 50% of Nigeria’s and Ethiopia’s indicators could be collected at the subnational level (green-coloured boxes in table 5), while just 38% of Pakistan’s could. Certain themes emerged in the types of indicators that all three countries prioritised. These include (i) private sector data (especially from private laboratories); (ii) subnational capacity to respond to public health threats; (iii) availability and capacity of electronic surveillance tools and systems; (iv) timeliness of routine data at the subnational level; (v) data quality scores and (vi) data related to internally displaced persons and returnees.
Conclusion: next steps to strengthen NPHIs’ role in national health security
By combining a rapid framework review and systematic consultative process to assess indicators, we have demonstrated a methodology that can be used to identify non-traditional indicators to improve national monitoring for preparedness. This process can strengthen the NPHI’s role as an established authority for health security. This approach was designed to optimise the role of the NPHI without adding the burden of choosing new indicators or collecting new data.
Among the insights already discussed, the absence of ‘travel and trade restrictions’ frameworks available at all levels is an important finding and perhaps provides some explanation for the international confusion and the lingering irresolution regarding IHR compliance during the early response to the COVID-19 pandemic.34
There were some limitations to our process. The JEE-gap areas presented in this paper reflect expert feedback; they do not represent comprehensive gaps in public health preparedness, and further research is needed to identify others. We used a rapid review to identify global-level frameworks and a convenience sample of informants for national-level frameworks; this most likely resulted in missed frameworks and indicators—especially noticeable with regard to the lack of granular local data (eg, below the health facility level). This process can also be time-intensive; NPHIs may use our key words and identified frameworks to reduce time, or conduct stage 1 and stage 2 separately and as needed. National-level framework identification was heavily influenced by existing relationships held by the NPHI or Project Team. While all countries had very little involvement from the travel and trade industry, Nigeria identified a higher proportion of indicators for that gap area because it had recently completed the WHO’s Strategic Tool for Assessing Risks.35 Thus, to expand their reach of frameworks within gap areas, countries should leverage existing multi-sectoral assessments as well as consider local datasets identified by district or provincial/regional sources. Finally, the second edition of the JEE was released in 2019 and has expanded in some areas, namely subnational inclusion and zoonotic surveillance.4 However, its frequency and many of its indicators remain unchanged, so our process would still be beneficial for regularly monitoring preparedness at a national level.
The process we have detailed is only a starting point for finding data within countries to better inform national health security. The present COVID-19 pandemic has demonstrated that political will can outweigh even the most informed public health process. Other questions remain, such as: is the NPHI able to collect these data? When and how often should they be collected? And how can the selected indicators inform decisions for better national preparedness? We address these questions in another publication.20
IANPHI is supporting NPHIs to become the established leads in health security in their countries. The above methodology has been translated into a toolkit to support NPHIs. A way forward could also include a common framework for incorporating non-traditional data into routine analysis for public health intelligence; the WHO Benchmarks for IHR Capacities could be useful to this aim.3
Global health security must empower localised preparedness and targeted response activities. Therefore, national health security must routinely monitor local data. There is no better time to start.
Acknowledgments
This project was conducted as part of the Strengthening Accountability and Preparedness for Global Health Security (SNAP-GHS) project. We are grateful for the approval of this study and input from the following NPHIs: Ethiopian Public Health Institute, Pakistan National Institute of Health, Nigeria Centre for Disease Control, Public Health England, Robert Koch Institute, Norwegian Institute of Public Health, and Netherlands National Institute for Public Health and the Environment. We also acknowledge the WHO Health Emergencies Programme and the International Association of National Public Health Institutes for their advice and support during the project.
References
Footnotes
Handling editor Seye Abimbola
Twitter @DrMishalK, @lara_hollmann
Contributors NAE, MK, AR-S and OD conceptualised the study. NAE, MK, AR-S, EAgogo, AM and TRR implemented the study under the supervision of CI, EAbate and AI. NAE and AR-S drafted the initial text, figures and tables. The text was revised and edited significantly by all authors.
Funding Public Health England funded Chatham House to conduct this study.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement All data relevant to the study are included in the article or uploaded as supplementary information.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.