Abstract
Introduction The exceptional production of research evidence during the COVID-19 pandemic required deployment of scientists to act in advisory roles to aid policy-makers in making evidence-informed decisions. The unprecedented breadth, scale and duration of the pandemic provide an opportunity to understand how science advisors experience and mitigate challenges associated with insufficient, evolving and/or conflicting evidence to inform public health decision-making.
Objectives To critically explore the challenges of advising evidence-informed decision-making (EIDM) in pandemic contexts, particularly around non-pharmaceutical control measures, from the perspective of experts advising policy-makers during COVID-19 globally.
Methods We conducted in-depth qualitative interviews with 27 scientific experts and advisors who are/were engaged in COVID-19 EIDM representing four WHO regions and 11 countries (Australia, Canada, Colombia, Denmark, Ghana, Hong Kong, Nigeria, Sweden, Uganda, UK, USA) from December 2020 to May 2021. Participants informed decision-making at various and multiple levels of governance, including local/city (n=3), state/provincial (n=8), federal or national (n=20), regional or international (n=3) and university-level advising (n=3). Following each interview, we conducted member checks with participants and thematically analysed interview data using NVivo for Mac software.
Results Findings from this study indicate multiple overarching challenges to pandemic EIDM specific to interpretation and translation of evidence, including the speed and influx of new, evolving, and conflicting evidence; concerns about scientific integrity and misinterpretation of evidence; the limited capacity to assess and produce evidence, and adapting evidence from other contexts; multiple forms of evidence and perspectives needed for EIDM; the need to make decisions quickly and under conditions of uncertainty; and a lack of transparency in how decisions are made and applied.
Conclusions Findings suggest the urgent need for global EIDM guidance that countries can adapt for in-country decisions as well as coordinated global response to future pandemics.
- COVID-19
- health policy
- public health
- Qualitative study
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
What is already known on this topic
Prior work largely explored pandemic decision-making processes in individual country contexts and the relative influences of different policy inputs, including scientific evidence.
However, less attention has been paid to the ways experts navigate and weigh diverse characteristics and constraints of evidence itself during pandemic response, including those relatively unique to this context (e.g., rapidly emerging or evolving evidence).
What this study adds
Leveraging the experiences of global scientific experts advising policy-makers during the largest pandemic in recent history, our study findings have implications for future approaches to adapting evidence-informed decision-making (EIDM) processes to meet the complexities, urgencies, and inequities exacerbated during pandemics and public health emergencies.
How this study might affect research, practice or policy
Study findings can inform the development of pandemic EIDM guidance to harmonise global approaches to pandemic control, and spur national and multilateral investments in systems and infrastructure that promote transparent, ‘science-led’ pandemic decision-making.
Study findings also indicate the need for additional attention to the following research areas: how science advisory bodies are created, structured and integrated into pandemic EIDM processes; how various forms of evidence are weighed in pandemic decision-making; the role of science in equitable pandemic response; and the implementation and evaluation of approaches to reduce ‘evidence inequities.’
Introduction
Evidence-informed decision-making (EIDM) relies on the best available scientific evidence, systematic use of data and information systems, community engagement in decision-making, application of programme planning and knowledge creation frameworks, conduct of sound evaluation, and dissemination of lessons learnt as a basis for decision-making.1 2 The ability to generate policy that is evidence-informed relies on the assumption that research can adequately explain complex societal phenomena. This assumption has been criticised for its inordinate inflation of science’s ability to comprehend the causes of, mechanisms behind, and solutions to complex, real-world problems.3 Moreover, it often implies that findings generated in one context will be generalisable to alternative contexts and communities; that policy problems are discrete, time-constrained issues that can be explored by well-designed empirical research and that evidence will point to unproblematic solutions.3 4
In the context of pandemics, rapidly evolving research environments, as well as the at times contentious sociopolitical context in which public health policy decisions must be made, create unique challenges for EIDM processes.5 6 Recent research and discussion specific to COVID-19 pandemic decision-making highlight concerns associated with the timing of response decisions, confusion and missed opportunities regarding levels of government responsible for and involved in decision-making, capacity to respond, communication around COVID-19-related guidance, and the use of discourse among authority figures to justify singular COVID-19 response options without attention to alternative policies.7–10 However, this research has fallen short of exploring specific challenges to EIDM as it relates to assessing, interpreting and translating evidence and advising pandemic decision-making from the perspective of those providing evidence. Moreover, little work has been done that takes a global perspective of the considerations facing EIDM in the context of pandemics, including issues concerning available evidence for decision-making.
Given the unprecedented breadth, scale, and duration of pandemic EIDM, the COVID-19 response provides an opportunity to understand the challenges associated with insufficient, evolving and/or conflicting evidence as it is evaluated and integrated into COVID-19 decision-making. Thus, this study was designed to critically explore the challenges of interpreting and translating evidence experienced by those providing advice in pandemic contexts, particularly around non-pharmaceutical control measures, from the perspective of experts from various countries and settings advising policy-makers during the COVID-19 pandemic. The objective of this paper is to describe and discuss the challenges reported by advisors, who include scientific experts across a range of disciplinary perspectives, regarding available evidence for informing decisions and challenges with integrating evidence to inform pandemic decision-making. Our work is guided by the following research questions:
How do experts and advisors reason with evidence that is insufficient, evolving and/or conflicting?
How do advisors integrate and adapt evidence to inform pandemic decisions in their context?
Methods
Study design
In this study, we employed semistructured qualitative interviews with experts who advise decision-makers on COVID-19 policy. The Lomas framework, which emphasises the multidirectional processes for integrating evidence into policy-making processes, as well as the importance of context-specific decision-making,11 12 has been previously used to analyse evidence-informed pandemic decision-making during H1N1.13 The framework provides a way of capturing how values and information are integrated by institutional structures for decision-making, including by advisors, who are the focus of our study, to inform policies or address a social problem or issue. Qualitative interviews are an appropriate method for critically examining these areas of interest to the study, namely challenges to EIDM, since they enable an in-depth examination into critical nuances, contextual factors, individual experiences, values and beliefs, and motivations of individuals to enhance our understanding of a particular problem, topic, or area of exploratory inquiry.14–16
Sampling
Interviewees were selected through a multistep process that first involved a purposive sample of potential interviewees (identified via coauthors’ professional networks and the WHO Social Sciences Research Roadmap Workgroup). Study leads (JV and NAE) created a primary and secondary list for outreach based on potential interviewees’ relevant expertise regarding the study’s objectives as well as representation across WHO regions and countries. We selected participants based on their role in advising COVID-19 decision-making with an emphasis on representation both across WHO regions and country-level decision-making. Notably, some interviewees advised decisions at subnational and/or national levels. In total, we attempted to recruit 49 experts. Of the 22 who did not participate, 16 were non-responses or no-shows, 5 shared that they did not have availability and 1 explained that they did not feel well positioned to answer the interview questions. Three of these individuals were based within the AFRO WHO region, 10 within PAHO, 8 within the EURO region and 1 from the WPRO region. Their roles in informing COVID-19 response ranged in scale from subnational to national and regional/international.
Data collection
We conducted in-depth, one-on-one videoconference or teleconference interviews with 27 experts within four WHO regions, representing 11 countries (see table 1). These individuals include scientific experts and advisors who are/were engaged in evidence-informed COVID-19 decision-making activities in lower-income and middle-income countries (LMICs) and high-income countries (HICs). Interviewees held a range of positions and operated at various, and at times multiple, levels of governance from local/city (n=3), state/provincial (n=8), federal or national (n=20), to regional or international (n=3) and university-level advising (n=3) (see table 1). They represented a variety of scientific disciplines, such as epidemiology, biology, infectious disease, immunology and anthropology, all with several years of experience in their current position and/or prior related roles (eg, informing public health decisions).
We created a semistructured interview guide (online supplemental appendix A), which was developed a priori according to the study objectives and related Lomas framework dimensions. The guide included introductory questions to understand interviewees’ backgrounds, their engagement with policy-makers and their roles in COVID-19 pandemic response, structures and processes they were involved in to inform decision-making, types of evidence used to inform decisions, as well as limitations of evidence and barriers to integrating evidence into decision-making. To focus the interviews on EIDM around community-based non-pharmaceutical interventions (NPIs), we provided specific examples to interviewees about school closures, mask ordinances, and/or stay-at-home and lockdown ordinances, though we indicated that they could speak to other examples as relevant. All interviewees were directly or indirectly involved in informing one or more of these ordinances.
All interviewees agreed to be recorded and consented to participate following narration of a verbal informed consent document that was also provided via email before the start of the interview. In keeping with our informed consent language, interviewees and their respective affiliations have been kept anonymous. All interviews were conducted in English and took place from December 2020 to May 2021, lasting between 30 and 90 min.
Data analysis
Interview recordings were professionally transcribed and reviewed for quality and accuracy. We generated a two-page to four-page summary of each interview, a majority of which were shared with interviewees to review for accuracy of interpretation. An initial coding scheme was created based on the interview guide, which included questions aimed at capturing interviewees’ roles in informing pandemic EIDM, the types of evidence they used to advise decisions, challenges and lessons learnt regarding accessing and incorporating evidence, as well as the Lomas framework to explore contextual influences on the decision-making process.11 Before finalising the initial coding scheme, we familiarised ourselves with the interview data and summaries, identifying patterns and generating additional codes as appropriate. (See online supplemental material for the interview guide and final coding scheme, which includes code names and definitions.) All transcripts were then coded using NVivo qualitative analysis software (V.12).
To promote reliability of the codebook instrument, two researchers cocoded two transcripts using the initial coding scheme. Code application was compared using the intercoder query function in NVivo V.12. For codes with low levels of agreement (below 90%) and/or kappa scores below the ‘moderate’ threshold determined by Cohen’s suggested kappa result categories (0.41–0.60), we discussed these codes, refined definitions and coding directions accordingly, and reapplied them to the first two transcripts until we achieved satisfactory kappa and agreement scores. The coding scheme was subsequently finalised (online supplemental appendix B), and one research team member applied the scheme to the remaining 25 transcripts. For each high-level code concerning evidence integration, challenges, lessons learnt and COVID-19 advisory structures, a summary memo was developed to synthesise key themes and identify illustrative quotes. Coded text was further reviewed and synthesised to identify common challenges that interviewees reported, both explicitly and implicitly, for informing and advising pandemic EIDM processes. The most commonly identified challenges were entered into a matrix where summary information by HICs, LMICs and level of decision-making was recorded to further explore common elements, context-specific issues and counterpoints. Further, the number and demographics of key informants reporting each challenge were recorded to gauge theme saturation and demographic-specific issues.
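For readers unfamiliar with these reliability metrics, the short sketch below illustrates in Python how percent agreement and Cohen’s kappa can be computed for a single code applied by two coders to the same text segments, and how the thresholds described above (90% agreement, kappa of at least 0.41) would flag a code for further discussion. This is not the authors’ workflow (they used NVivo’s built-in intercoder comparison query); the function names and coding decisions are hypothetical and purely illustrative.

```python
# Illustrative sketch only: percent agreement and Cohen's kappa for one code,
# applied (1) or not applied (0) by two coders to the same text segments.

def percent_agreement(coder_a, coder_b):
    """Share of segments where both coders made the same decision."""
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Observed agreement corrected for agreement expected by chance."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    # Chance agreement from each coder's marginal probability of applying the code
    p_a_yes = sum(coder_a) / n
    p_b_yes = sum(coder_b) / n
    p_expected = p_a_yes * p_b_yes + (1 - p_a_yes) * (1 - p_b_yes)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical coding decisions for ten segments
coder_1 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
coder_2 = [1, 0, 0, 0, 1, 0, 1, 0, 1, 1]

agreement = percent_agreement(coder_1, coder_2)
kappa = cohens_kappa(coder_1, coder_2)

# Flag the code for discussion if agreement < 90% or kappa is below 'moderate' (0.41)
needs_review = agreement < 0.90 or kappa < 0.41
print(f"agreement={agreement:.2f}, kappa={kappa:.2f}, needs_review={needs_review}")
```

In practice a dedicated implementation (eg, NVivo’s comparison query or sklearn.metrics.cohen_kappa_score) would be used rather than hand-rolled functions; the sketch is only intended to make the thresholds in the paragraph above concrete.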
Patient and public involvement
Our study did not involve enrolment of patients. The individuals at the focus of this study (advisors for COVID-19 decision-making) were not and will not be involved in the design, conduct, reporting or dissemination plans of our research.
Findings
Interviewees reported participation in a range of advisory structures, including independent advisory groups, panels, or tables created in response to COVID-19, outside consultancy firms/partnerships, university-level advisory committees, as well as existing public health advisory structures within government. Processes for integrating evidence ranged from preparation of research briefs on key pandemic containment strategies to conference or roundtable-style discussions among experts, policy-makers, and other stakeholders. Here, we present findings according to our primary study objective by first illustrating significant themes pertaining to limitations of COVID-19-related evidence. Following this, we highlight difficulties interviewees mentioned with regards to integrating evidence into pandemic-related decision-making. Finally, we share examples of how interviewees reported overcoming challenges associated with available COVID-19 evidence and integrating evidence into COVID-19 policy before moving into our discussion and conclusions. Towards the end of each section, we present quotes exemplifying these key themes. Unless otherwise noted, the following themes spanned across interviewees from both HICs and LMICs.
Challenges associated with available evidence
Nearly three-quarters of respondents noted concerns regarding the evolution and influx of new evidence throughout the pandemic, which was often overwhelming and difficult to keep up with (particularly for some interviewees from LMICs), conflicting or lacking consensus, or inadequate to inform in-country COVID-19 measures. Specifically, they referenced examples concerning the dynamic nature of the pandemic itself and how evidence had to keep up with the changing virus and pandemic conditions, conflicting or inadequate evidence relating to COVID-19 treatment (eg, hydroxychloroquine), scientific contention regarding mask use and ventilation as protective measures, as well as evolving evidence regarding vaccines and vaccine efficacy. An interviewee based out of the UK used the example of face coverings to highlight challenges with the amount of evidence that they had to distill in order to advise decision-making:
I think probably the best [example] is face coverings, where there has been different interpretations of the evidence … and you can interpret those strands of evidence differently depending on your view, and I think that has caused some controversy over what is the value of face coverings, because you can basically pick your evidence base and make a completely different argument about whether they are very effective, partially effective or not effective, and that has been a problem (UK2).
The speed at which new evidence was coming out about the SARS-CoV-2 virus made it onerous for experts to rapidly collate, assess, and synthesise evidence to inform decision-making. An interviewee based out of Sweden (SE2) described how this made it difficult ‘to be evidence based’ given that we are dealing with a new disease where evidence, and thus responses, develop over time.
In part related to the previous challenge, about a third of interviewees described issues and concerns pertaining to the scientific integrity of available evidence, a lack of rigour associated with available evidence, and misinterpretation and misapplication of evidence. One Canadian expert summarised these challenges, arguing:
I just think the limitations are really the lack of scientific rigor and the inability of many evaluators—and I use that term loosely—to assess the quality of that evidence. And now in the 21st century, with everybody having access to everything and everybody having access to a platform, it really confused people. And going back to the trusted source: who do you trust? … So the barriers, I think, are a lack of ability [to assess] the quality of the evidence. (CA2)
Many reported that these issues were compounded by insufficient time to properly review or interpret findings before decisions had to be made. As an expert from Ghana (GH1) explained, ‘[t]ypically before an intervention, [we] would have conducted pilot tests and an evaluation of these tests, and engaged stakeholders. However, these steps were essentially skipped or quickly done.’ Interviewees also shared examples of how, especially early in the pandemic, there was a dearth of quality evidence, and noted how in some instances they believe political influences compromised the integrity of evidence. For example, an interviewee based out of Uganda shared that:
I thought the evidence we were generating from the cases reported was not accurate, and of course, somebody from the ministry, from the government may not give you that information. For the most part, as a researcher, we thought the reporting was not accurate and therefore certain decisions will have been made based on probably wrong data. (UG1)
Other difficulties with COVID-19 evidence related to issues of country-level or public health agency capacity within countries to evaluate and produce evidence, including having to adapt evidence from other geographic, political and cultural contexts. The roughly one-third of interviewees who shared these constraints primarily represent LMICs (Colombia, Ghana, Nigeria and Uganda), thus emphasising disparity in terms of research capacity to rapidly evaluate evidence for appropriate COVID-19 response and an initial reliance on research and evidence that may not be reflective of in-country circumstances. Below we provide two quotes, one from an expert in Nigeria and another from Colombia, that illustrate these context-specific challenges:
The literature [from other contexts] helped to sort of create a sort of framework. But at the end of the day, that just sort of gives you an idea of where things are going. you still have to contextualize it … Sometimes, we look at things that have been evaluated because the perception from a high-income country where you don’t necessarily have the density of people that we have or the level of poverty—the decisions you will take and what you would consider the best option, we would have to reevaluate that because it may not work here at all. (NG1)
[This guidance/measure is] not going to help because the context is different … We need to concentrate on what are differences among the little neighborhoods here … We need to understand what is happening inside our territory, not compared to us… Even if you compare Colombia to the US or Italy, it doesn’t make sense. It’s so different. But that was very difficult to… comparing to others seems to be very important, which I saw, it was pointless. (CO2)
Limited testing capacity to produce observational data and inform benchmark measures, limited preparation for and capacity to conduct research locally, and limited resources to assess evidence generated internationally posed significant challenges for these individuals and countries. Notably, the aforementioned challenges associated with available evidence were particularly pronounced in the early stages of the pandemic, but nonetheless reflect constraints that experts navigated before bringing forward evidence to decision-makers.
Challenges integrating evidence into pandemic decision-making
Interrelated with issues concerning available evidence were challenges associated with integrating COVID-19 evidence into decision-making. A theme commonly referenced by more than half of interviewees, though not necessarily unique to pandemic EIDM, was the acknowledgement of the multiple forms of evidence and perspectives that decision-makers must take into consideration before making policy decisions. Not all interviewees mentioned this explicitly as a challenge, but rather as the inevitable nature of EIDM. However, in a pandemic context that is quickly evolving, interviewees reported that there was sometimes contention or a lack of clarity about what types of evidence were prioritised for informing COVID-19 measures. As one expert based out of Colombia explained:
I am not the only person giving advice or giving recommendations. So sometimes other researchers or other epidemiologists would give a different recommendation. And the decision-makers may think that that’s the evidence they need. it’s a very complex thing. (CO1)
Our interviewees recognised that their evidence-based recommendation is often only a (small) part of the policy-making equation. Local political considerations, capacity to adopt interventions, and cultural values were described as core to the choices taken, in addition to the sometimes competing perspectives of stakeholders, experts and technical advisers from other disciplines. These included, for instance, socioeconomic factors such as food access and security and the costs of keeping children out of school (eg, impacts to developmental progress and delays in achieving educational competencies). An expert out of Hong Kong characterised the decision-making process as a ‘very political situation’ and noted that ‘there is political input that [decision-makers] have to worry about. The business situation that they have to worry about, local public concern. anger, distrust. All of these things are there.’ Our interviewees went on to highlight the need for multiple perspectives to explore unintended impacts and more innovative options, recognising their own personal and disciplinary blind spots.
This characteristic of EIDM is exacerbated in a pandemic context on two fronts: the difficulty with rapidly vetting, producing, and synthesising multiple forms of evidence for decision-makers and the fact that decision-makers have to make decisions quickly—often with incomplete, conflicting, or unavailable evidence. This was particularly pronounced early on in the COVID-19 pandemic. Not only do decisions need to be made swiftly, but comprehensively—balancing the interests, findings and realities of multiple, at times conflicting perspectives (eg, negotiating economic, public health, sociocultural considerations). A Denmark-based advisor described the difficulties with having to make decisions quickly and under conditions of uncertainty:
[T]here was so much pressure to make decisions under intense time constraints. So it was ‘better than nothing,’ I think, was sometimes the phrase that I heard. So this is as good as we’ve got at the moment, so we will go with this. So it was that—I mean, pragmatic’s not the right word because it was much more than pragmatic and potentially ill-conceived and inaccurate. But yeah, that as, I think, also a feature of the pressure that people were feeling that they just needed to come up with something and present that to the chief health officer. (DE1)
Various other factors mediated the implementation of COVID-19-related measures, including public acceptance of policy, capacity to implement (eg, financial capacity for measures that require widespread testing), having the political will necessary to implement measures, and the practicality of certain measures that involve weighing the costs and benefits of intended and anticipated outcomes (eg, of school closures). EIDM processes cannot be appropriately understood outside of sociopolitical contexts, and the influence of considerations such as these on COVID-19 decision-making is not always clear or linear. Finally, more than a third of participants mentioned either a lack of clarity, transparency, or existence of EIDM processes or decision-making structures to inform COVID-19 policy. A US-based advisor described the process as somewhat typical of their interactions with government decision-makers, explaining:
That’s something I observed in a lot of interactions with government decision-makers, is that they ask what you think, and then they do something and they never tell you how, if at all, what you told them influenced or didn’t influence their decisions or what else influenced them. They don’t even tell you the decisions. They say, ‘Thank you very much,’ and you’re done, and you have to reconstruct it yourself. (US3)
While this type of challenge or frustration of EIDM is not necessarily specific to the pandemic context, it nonetheless posed additional challenges, particularly in politically charged environments where public scepticism of science is growing. Challenges associated with transparency were also not limited to decision-making structures within national or subnational contexts, as some interviewees from LMICs called on HICs to be more transparent when it comes to data availability and information sharing that could aid in decision-making. In addition, one interviewee expressed frustration as to how unclear they believed the process behind informing WHO recommendations to be.
Several experts represented in this study provided examples of how they tried to address or overcome the aforementioned challenges. These included, for instance, relying on past experiences with H1N1, SARS and Ebola, recognising that multiple forms of evidence and perspectives must be taken into account in EIDM, and accepting that decisions cannot always be delayed while waiting for enough evidence to validate them, especially in a pandemic context. As one interviewee based out of Canada explained, ‘you can’t be a purist here as you may have established pragmatism, including understanding what is politically feasible and what is not. The perfect is the enemy of the good.’ Others referenced taking precautionary approaches before data and evidence became available to make decisions (eg, putting mask ordinances in place before widespread consensus on mask use, setting lockdown ordinances). Relatedly, interviewees shared what they believe ‘works’ in terms of improving pandemic EIDM: transparency and accountability for EIDM (eg, who ultimately makes policy decisions, how evidence is weighed); the production and use of rapidly synthesised literature reviews and briefs for informing decisions; modelling and observational data for establishing decision-making thresholds (though not without acknowledging key limitations relating to capacity to produce such data and to include appropriate indicators); and the need for and recognition of multiple perspectives in pandemic EIDM, with some noting explicitly the need for social and behavioural science perspectives.
Discussion
Scientific advisors faced multiple constraints and challenges surrounding available COVID-19 evidence and informing EIDM, which included: the speed and influx of new, evolving and at times conflicting evidence; concerns about scientific integrity and misinterpretation of evidence; the limited in-country capacity to assess and produce evidence, as well as limitations of adapting evidence from other contexts; navigating multiple forms of evidence and perspectives needed to inform decisions with multi-sectoral impacts; having to provide recommendations quickly and under conditions of uncertainty; and a lack of transparency surrounding how decisions are ultimately made. These findings affirm challenges to COVID-19 EIDM noted elsewhere17 18 and also shed new light on the overarching challenges in evidence interpretation and translation affecting the global scientific and advisory community that are specific to, or amplified by, pandemic contexts.
Our interviewees, particularly those from LMICs, emphasised challenges associated with accessing and assessing evidence itself, the foundation of EIDM. An emphasis on science and evidence translation, while important, should shift to include how experts process, understand and synthesise evidence for decision-making in a high-uncertainty and evolving public health policy environment.6 19 20 For example, interviewees expressed challenges with conflicting evidence, including the inability to compare or weight studies that came to disparate conclusions using incongruent approaches or in different contexts, that lacked scientific rigour or integrity, or that were released without standard quality control processes (eg, peer review). While the scientific community responded to the COVID-19 pandemic with unprecedented vigour and speed,21 22 our findings indicate that volume came at the expense of trustworthiness and interpretability, and point to a critical need to rethink how funders, multilateral organisations, governments and scientific organisations prioritise, fund, coordinate and communicate science in the context of pandemics and public health emergencies. Specifically, evaluation of the WHO R&D Blueprint, a global effort designed to enable rapid research and development activities in the context of epidemics, including its impacts on global pandemic response equity, should be prioritised. This Blueprint, for example, could more explicitly focus on generating evidence relevant to LMIC contexts and supporting research capacity development within LMICs, including through investment in local scientific infrastructure and investigators and knowledge translation efforts—for both policy-makers and the general public. Further, activities under the Blueprint should be assessed to ensure they are having an equitable impact on research capacity building across both HICs and LMICs.
Interviewees also expressed frustrations regarding their lack of understanding of the policy-making process and the lack of transparency regarding if and how evidence and/or advice was ultimately incorporated into policy. Their sentiments echo those of recent research concerning COVID-19 EIDM and prior research following H1N1, where rapidly developed scientific advisory committees had little time to establish credibility, and scientific advisors were left with unanswered questions about how advice was used in recommendations ultimately generated.13 18 Yet, in non-pandemic contexts, policy scholars and practitioners alike have long recognised the competing priorities and influences of the policy-making process.11 23 As policy-makers are constantly synthesising information—including scientific evidence—alongside intangible and individualised values, policy-making is inherently opaque. Thus, a transparent or purely evidence-based process would require removing integration of values in decision-making altogether. Education of scientists and policy-makers about each other’s needs, perspectives and constraints to EIDM,24 including the characteristics of successful actors in the policy-making process, may thus be a more realistic and fruitful approach to promoting prioritisation of scientific evidence among the multiple inputs considered by decision-makers. Although the current study focuses on the experiences and perspectives of advisors informing the EIDM process, we recognise that a lack of understanding may exist on behalf of policy-makers regarding the research and evidence generation process.25 All actors involved in EIDM must be considered in order to have a fuller understanding of the constraints facing pandemic EIDM and opportunities for capacity-building and mutual learning.
At the same time, global investments in local scientific capacity, particularly in LMICs, are essential to a fair and equitable global pandemic response. Some interviewees, particularly those from LMICs, described a lack of research capacity in their home countries as a limiting factor that hinders development of feasible policy options. Early in the COVID-19 pandemic, a majority of evidence evaluating response and recovery interventions came from HICs and well-resourced healthcare settings,26 27 while the burden of COVID-19 in LMICs is only expected to grow.28 Evidence on NPIs, hospital-based care and surveillance practices implemented in high-resource settings may not be relevant to low-resource settings given the array of economic, demographic, social and cultural differences.29 30 This further speaks to the necessity for additional research focused in lower-resource settings that involves local investigators, as well as the need to address disparities regarding equitable research capacity for developing evidence to inform decisions and equitable access to evidence so that LMICs have the resources necessary to respond effectively to pandemics. Such local capacity must complement—not be replaced by—multilateral efforts to develop centralised pandemic surveillance infrastructure (eg, the recently announced Hub for Pandemic and Epidemic Intelligence in Berlin, supported by the WHO and German government). Moreover, coordinated investments to engage multiple disciplines in pandemic scholarship—including the social sciences—are essential to understand and address the enormous societal impacts of pandemics.
As pandemic decisions necessitate action amid scientific uncertainty and emerging evidence,4 13 pandemic-specific guidance to consider trade-offs and integration of multiple perspectives will be necessary to avoid creation and perpetuation of inequitable consequences.31–33 This includes, for instance, guidance around management of pandemic EIDM processes such as convening a diverse group of experts and perspectives, open tools and responsible data sharing across in-country agencies and between countries, and processes for rapidly reviewing and compiling research briefs to inform decision-makers. Recognising that countries and jurisdictions approach COVID-19 EIDM differently (eg, some with pre-existing public health structures in place, others with newly created structures, and with variance in terms of top-down vs bottom-up approaches), there is a need for well-established pandemic science advice infrastructure (eg, formal and interdisciplinary convening bodies) at national, regional and international levels in contexts where such infrastructure does not currently exist.
Systems that make evidence rapidly accessible yet rigorously reviewed, including coordinated research efforts across countries as well as decision-support frameworks, can enhance our global capacity to respond to the next pandemic.32 34 35 For example, rapid evidence assessment panels supported by individual governments and multilateral organisations were described as important resources by interviewees. At the same time, interviewees called for transparency among countries to learn from one another rather than operating individually, which may have resulted in unrecognised redundancies and duplicated efforts. Moreover, given the rapid scientific response witnessed during the COVID-19 pandemic, decision-making bodies must also be agile (and appropriately resourced) to change course as necessary as new evidence emerges. As such, the findings presented here signal an urgent need for global EIDM guidance that countries can use to inform nimble systems and infrastructure that provide capacity for prepandemic, intrapandemic and postpandemic decision-making. A global approach to address the challenges we identified (summarised in table 2, with recommendations to address these challenges) can encourage and inform national-level and multilateral investments necessary to create a more formalised and coordinated global approach that diminishes silos and identifies processes that can be standardised to facilitate global EIDM harmonisation. Grounded in stakeholder input and oriented towards feasibility and accessibility, such an approach can deliberately highlight and propose solutions to address inequities in capacity and capability in pandemic EIDM and resultant disparities in pandemic outcomes within and across countries. The WHO, through its R&D Blueprint, is well positioned to support and coordinate such an effort. Further, this study highlights the need for intrapandemic research investment, including linked networks of advisors for supporting decision-making.
Limitations
We drew on the perspectives of 27 scientific experts providing technical advice in 11 countries across four WHO regions, with the goal of both breadth (geographical diversity and inclusion of HICs and LMICs) and depth (multiple informants in ‘case’ countries). However, we recognise that our results do not reflect a global perspective nor the perspectives of policy-makers, who are critical actors in EIDM processes.36–38 While a strength of our research is that it minimised recall bias through capture of real-time perspectives, this approach precluded integration of policy-maker perspectives. Learning from the experience of coauthors who, around the same time, had interview requests declined by policy-makers in multiple countries due to competing priorities and time constraints, we designed our study with a focus on advisors to promote feasibility. We recognise this focus on policy-makers as an important area for future inquiry. Moreover, our sampling approach limited our ability to deeply account for contextual nuances based on geography, infrastructure and other circumstances. In addition, our cross-sectional interviews, conducted in late 2020 and early 2021, reflect scientific advisers’ experience at a particular stage of the pandemic, which also varied across countries. Yet, they provide critical insights into decision-making in the early phases of a pandemic, which is arguably where the most evidence and resource constraints will exist. Importantly, we also recognise the limitation we have as an author team in that none of us represent an LMIC context (although we do span considerable geographical areas). Our positionalities undoubtedly influenced who we were able to access for interviews, especially since we were only able to conduct interviews in English, and how we were positioned to interpret the data from contexts other than our own. In an effort to address this limitation, we integrated credibility checks of our interpretations (member checking), an integral part of robust and trustworthy qualitative research.39
Conclusions
COVID-19 response across the world has shown us that a pandemic context requires both robust research skills and capacity for rigorously and rapidly conducting research, assessing evidence and informing policy, and decision-making structures cognisant of and readily adaptive to emerging findings (generated domestically or abroad). While competing inputs and value systems may inhibit any decision from being purely ‘evidence-based,’ maximising the impact of evidence in pandemic decision-making globally requires clear guidance and approaches for rapidly assessing and synthesising evidence for integration into policy, including appropriate representation of experts and stakeholders within EIDM processes, and efforts to foster a sociopolitical environment that is receptive to and supportive of EIDM. This has become increasingly important as we witness growing contention, politicisation, and fatigue around COVID-19 response. Additional research areas that arise from this study include: how science advisory bodies, particularly multistakeholder groups, are created, structured and integrated into the pandemic EIDM process; how various forms of evidence are weighed in decision-making in a pandemic context; the role of science in a fair and equitable pandemic response; and the implementation and effectiveness of solutions to reduce ‘evidence inequities,’ including disparities in access, generation and interpretation capacity, for a more just global response to future pandemics and public health emergencies.
Data availability statement
No data are available. Data are unavailable based on approval conditions of ethical boards.
Ethics statements
Patient consent for publication
Ethics approval
This study involves human participants but the University of Washington Human Subjects Division (IRB ID: STUDY00010693) exempted this study.
Acknowledgments
Juliette Randazza, Rachel Wittenaur, and the WHO COVID-19 Social Sciences and Ethics Research Roadmap Workgroups.
References
Supplementary materials
Supplementary Data
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
Footnotes
Handling editor Seye Abimbola
Twitter @jamievickery
Contributors JV: guarantor, data curation, formal analysis, investigation, methodology, project administration, writing—original draft and revised draft; PA: conceptualisation, funding acquisition, investigation, methodology; LL: conceptualisation, funding acquisition, methodology; OR: conceptualisation, funding acquisition, methodology; E-KY: conceptualisation, funding acquisition, investigation, methodology; CB: data curation, formal analysis, investigation; NAE: guarantor, conceptualisation, funding acquisition, investigation, methodology, project administration, supervision, writing—original draft and revised draft. All authors critically revised the manuscript and approved the final version submitted. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.
Funding This publication has been supported by German Federal Ministry of Health (BMG) COVID-19 Research and Development funding to WHO. Study sponsor provided funding support for data collection, analysis, interpretation of data and publication.
Disclaimer The sponsor did not contribute directly to these activities.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.