Article Text

‘The response is like a big ship’: community feedback as a case study of evidence uptake and use in the 2018–2020 Ebola epidemic in the Democratic Republic of the Congo
Gillian McKay1, Ombretta Baggio2, Cheick Abdoulaye Camara3, Eva Erlach2, Lucia Robles Dios4, Francesco Checchi5, Hana Rohan6

1Department of Global Health and Development, The London School of Hygiene and Tropical Medicine, London, UK
2International Federation of Red Cross and Red Crescent Societies, Geneva, Switzerland
3International Federation of Red Cross and Red Crescent Societies, Conakry, Guinea
4International Federation of Red Cross and Red Crescent Societies, Goma, Democratic Republic of the Congo
5Department of Infectious Disease Epidemiology, The London School of Hygiene and Tropical Medicine, London, UK
6UK Public Health Rapid Support Team, The London School of Hygiene and Tropical Medicine, London, UK

Correspondence to Gillian McKay; gillian.mckay@lshtm.ac.uk

Abstract

Introduction The 2018–2020 Ebola outbreak in the Democratic Republic of the Congo (DRC) took place in the highly complex protracted crisis regions of North Kivu and Ituri. The Red Cross developed a community feedback (CF) data collection process through the work of hundreds of Red Cross personnel, who gathered unprompted feedback in order to inform the response coordination mechanism and decision-making.

Aim To understand how a new CF system was used by Ebola response leadership to make operational and strategic decisions.

Methods Qualitative data collection in November 2019 in Goma and Beni (DRC), including document review, observation of meetings and CF activities, key informant interviews and focus group discussions.

Findings The credibility and use of different evidence types were affected by the experiential and academic backgrounds of the consumers of that evidence. Ebola response decision-makers were often medics or epidemiologists who tended to view quantitative evidence as having more rigour than qualitative evidence. The process of taking in and using evidence in the Ebola response was affected by decision-makers’ bandwidth to parse large volumes of data coming from a range of different sources. The operationalisation of those data into decisions was hampered by the size of the response and an associated reduction in agility in responding to new evidence.

Conclusion CF data collection has both instrumental and intrinsic value for outbreak response and should be normalised as a critical data stream; however, a failure to act on those data can further frustrate communities.

  • health policy
  • epidemiology
  • viral haemorrhagic fevers
  • qualitative study




Key questions

What is already known?

  • Decision-makers in outbreaks are besieged by data from many sources and find it challenging to integrate evidence given many competing priorities.

What are the new findings?

  • The Red Cross community feedback (CF) system provides a lens through which to examine how new forms of evidence (particularly qualitative evidence) were taken up and integrated into the North Kivu Ebola response.

  • Decision-makers largely had medical or epidemiological backgrounds and tended to prefer quantitative evidence types; qualitative evidence therefore had to be presented in a ‘quantified’ way to be taken in by this audience.

  • Evidence-based policy and practice change in the Ebola response was hampered by the geographic scale of the outbreak and the large number of responding actors, resulting in an insufficiently nimble response and frustrating the communities who were providing feedback.

What do the new findings imply?

  • CF systems like that of the Red Cross are an important mechanism for gathering and presenting community views to decision-making bodies in the midst of a public health crisis, and should be rolled out for future outbreaks.

Introduction

The 10th known Ebola outbreak in the Democratic Republic of the Congo (DRC) was announced on 1 August 2018, and declared over nearly 2 years later on 25 June 2020 with a total of 3470 cases and 2287 deaths.1 The outbreak took place in a highly complex environment, including active conflict, displaced populations, inaccessible terrain and porous borders. It was characterised by unprecedented violence against staff and assets involved in the Ebola response and consequently a marked securitisation of response operations.2 3

The way that decisions are made in the midst of infectious disease outbreaks has been studied in both high-income and low-income settings.4–6 A recent scoping review covering infectious disease outbreaks in high-income, middle-income and low-income countries, including the West African Ebola outbreak,6 found that decision-makers are challenged by multiple competing priorities, struggle with uncertainties and different interpretations of evidence, and often prioritise quantitative (epidemiological and mathematical modelling) evidence types to make their decisions. To challenge this epistemic hierarchy, in this study, the authors used Rycroft-Malone et al’s definition of evidence-based practice in healthcare, which ‘does not presuppose the value of a particular evidence source or study design over another, but instead highlights the importance of ensuring that the evidence used to inform practice (and policy) has been subject to scrutiny’.7

One important domain of evidence for responding appropriately to epidemics involves the collection and use of community feedback (CF) to identify community concerns and incorporate these into decision-making. While the goal is to improve interventions and help ensure accountability to local populations, these aims are not always achieved.8 Traditionally, feedback mechanisms have included feedback boxes, help desks and community meetings.9 As one indication of its importance to humanitarian work, CF accounts for two (commitments four and five) of the nine commitments of the Core Humanitarian Standard, to which humanitarian response agencies can commit in order to improve accountability to affected populations.10

In 2018, as part of the DRC Ebola response, the DRC Red Cross Society and International Federation of Red Cross and Red Crescent Societies (IFRC), in collaboration with the US Centers for Disease Control and Prevention (CDC), set up a programme to routinely and systematically gather CF through its network of Red Cross community volunteers.11 This feedback was intended to be used (1) by the Red Cross, to better understand community concerns and guide its internal weekly planning, and (2) by the wider Ebola response coordination and decision-making bodies, to ensure that community perspectives, perceptions and disease understandings were at the centre of epidemic response strategies, a major recommendation arising from lessons learnt during the West African Ebola epidemic.12 13 Additional detail about the system itself is described elsewhere.14–16

The structure of the DRC Ebola response changed over time, but remained organised around technical ‘commissions’ or pillars (see figure 1), based on the WHO’s Incident Management System.17 These pillars were largely the same at all levels of the DRC Ebola response. For a comprehensive review of coordination of the Ebola response over time, see the Humanitarian Policy Group’s report on the 10th DRC outbreak.18

Figure 1

Ebola response pillar structure (simplified).

Decision-making in the Ebola response was somewhat decentralised, with coordination hubs at multiple geographic levels and operational coordination at aggregated health zone levels and at sub-coordination levels (see figure 2). At all levels, decision-making was led by Ministry of Health (MoH) staff, with technical support and advice from UN Agencies, the IFRC, DRC Red Cross, donors and non-governmental organisation (NGO) partners. Decision-makers were faced with an often-overwhelming volume of data to sift through and prioritise to make strategic and operational plans. At the strategic level (in Goma), decision-makers also had to balance the immediate priorities of outbreak control with other more distal challenges, including security and access, the economic fallout of disease spread, political pressures to bring the outbreak under control and the wider humanitarian needs of the local population.

Figure 2

Ebola response coordination levels (simplified).

In this paper, we present a summary of the way that CF evidence was taken up by the Ebola response, grounded in data from interviews conducted with a cross-section of Ebola response staff and volunteers. These results highlight the highly complex nature of this particular outbreak response context. We do not attempt to provide a comprehensive account of evidence use in outbreaks, as that is beyond the scope of this paper.

Methods

Red Cross CF system

The Red Cross CF data collection and analysis process started with Red Cross volunteers noting down unstructured feedback from community members in the course of their daily work engaging with communities. While ‘community’ is a contested term, in this research study we interpreted it in the same way as the Red Cross CF system to ensure consistency of interpretation across both intervention and evaluation.19 This feedback was classified by the field volunteers on a collection form as: (1) questions; (2) statements (rumour, belief, observation); (3) suggestions/requests; (4) sensitive or violence related; (5) appreciation; (6) other (refused dialogue). The feedback was then passed to local field teams so that the classification could be validated for subsequent data entry. Following this, feedback was coded thematically with more granularity by IFRC staff and Red Cross volunteers, and then quality checked by US CDC staff. Any discrepancies or coding scheme adjustments were reviewed in weekly teleconference calls between the partners. In the early days of the intervention, the CDC provided substantial support on coding, but this skill was transferred to the local level once the project was more established; the CDC then took on more of a quality assurance role.

The granular themes applied by these teams were developed using an iterative approach over many months of data collection and analysis. The analytical process was detailed and was strengthened by the involvement of multiple teams, which effectively acted as multiple coders validating each other’s work. This work of data collection, coding and thematic analysis allowed for the creation of weekly briefs by geographical zone, deep-dive briefs, trend analyses and specialised presentations for field-level and strategic decision-making. A dashboard of the coded CF was also made available to all response partners.16
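To make the flow from verbatim feedback to weekly briefs concrete, the minimal Python sketch below shows what a single feedback record and a per-health-zone tally of coded themes could look like. It is purely illustrative: the category list paraphrases the collection form described above, and the field names, theme codes and example comments are hypothetical rather than taken from the actual Red Cross/IFRC tools or dashboard.

```python
from collections import Counter, defaultdict
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical classification categories paraphrasing the collection form
# described above; the real Red Cross/IFRC code lists may differ.
CATEGORIES = [
    "question",
    "statement",                      # rumour, belief, observation
    "suggestion_or_request",
    "sensitive_or_violence_related",
    "appreciation",
    "other",                          # eg, refused dialogue
]


@dataclass
class FeedbackRecord:
    """One verbatim comment noted by a volunteer (illustrative fields only)."""
    recorded_on: date
    health_zone: str
    verbatim_text: str
    category: str                     # one of CATEGORIES, validated by field teams
    theme: Optional[str] = None       # granular code applied later during thematic coding


def weekly_brief(records: list[FeedbackRecord]) -> dict[str, Counter]:
    """Tally coded themes per health zone: the kind of summary that could
    feed a weekly brief or a dashboard shared with response partners."""
    briefs: dict[str, Counter] = defaultdict(Counter)
    for record in records:
        if record.theme:              # only records that have been thematically coded
            briefs[record.health_zone][record.theme] += 1
    return dict(briefs)


# Two invented records for illustration.
records = [
    FeedbackRecord(date(2019, 11, 4), "Beni", "Why are belongings burnt after decontamination?",
                   "question", theme="ipc_burning_of_goods"),
    FeedbackRecord(date(2019, 11, 5), "Beni", "Families want to see the deceased before burial.",
                   "suggestion_or_request", theme="burial_rites"),
]
print(weekly_brief(records))
# {'Beni': Counter({'ipc_burning_of_goods': 1, 'burial_rites': 1})}
```

In practice, the weekly tallies described above were produced by analysts rather than by any single script, but the sketch illustrates the basic unit of data (one verbatim comment, one classification, one theme) on which the briefs and dashboard were built.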

At the time of data collection, CF was collected and analysed under the Risk Communications and Community Engagement (RCCE) pillar, one of several technical response pillars (figure 1). As part of this Red Cross initiative, CF meetings were established where organisations contributing CF could present their latest community collected information for discussion and analysis (see figure 3), with escalation to decision-makers as needed.

Figure 3

Community feedback information flows (simplified).

The Red Cross CF system is, to our knowledge, unprecedented in scope and breadth, generating (between August 2018 and June 2020) approximately 300 000 individual verbatim records of feedback received by over 800 Red Cross volunteers during their routine fieldwork in 29 health zones. In a separate paper, we will analyse patterns in the Red Cross CF across time and by stage of the epidemic, and evaluate its potential accuracy for providing early warning of attacks against Ebola responders, a common feature of the Eastern DRC epidemic. Here, we examine qualitatively the utility of the Red Cross CF system for decision-making, and, more broadly, how CF evidence was used during the response to inform strategy.

The study encompasses the epidemic period up to October 2019. Data collection took place in two locations: (1) Goma, the capital of North Kivu and a large city of approximately 630 000 people on Lake Kivu, with an international airport and easy land-border access to Rwanda; Goma was not heavily affected by local Ebola cases but was established as the coordination hub for the response; and (2) Beni, a city of approximately 230 000 people and the main city of the ‘Grand-Nord’ region of North Kivu, an early epicentre of the Ebola outbreak and a frequent site of militia attacks on government, UN forces and civilian populations.

Study methods

This study used qualitative data collection methods, including document review, CF collection observation, meeting observation, key informant interviews (KIIs) and focus group discussions (FGDs). Data collection for this study was conducted in November 2019, with 17 KIIs and 1 FGD conducted in Goma, 13 KIIs and 2 FGDs in Beni in the DRC, and one additional interview conducted remotely from London, UK. The lead researcher (GM) is a nurse and was herself previously deployed to the DRC for the Ebola response, though working for different response pillars and with no interaction with the Red Cross CF system.

Documents reviewed included policies and strategies relevant to CF and Safe and Dignified Burials (SDB), along with the multi-sectoral strategic Ebola response plans that encompass all components of the Ebola response (see list of coordination meetings and key documents in online supplemental appendix A). Observations of eight meetings took place, including general coordination, CF and community engagement, as well as internal Red Cross meetings. Meetings were not recorded, but detailed written notes about meeting processes and engagement with CF were taken by the researcher.


A total of 30 KIIs were conducted with staff from the national Red Cross, the IFRC, the MoH, the Ebola response coordination, NGOs, UN Agencies and funding bodies (see online supplemental appendix B). The interviews used a semi-structured interview guide that was iterated over time as new findings and themes emerged (see online supplemental appendix C for topic guides). Three FGDs were held with Red Cross volunteers, one with community engagement personnel and two with SDB personnel (see online supplemental appendix D for topic guides). All interviews and FGDs were recorded using an encrypted audio recorder, and were then translated (where French was the language of the interview) and transcribed by a professional agency. Observation of the Red Cross CF system took place through a field visit with CF teams collecting data, and during CF analysis meetings and coordination meetings where the feedback was discussed.

All data were analysed in NVivo using a thematic analysis framework approach,20 where codes derived from the interview topic guide were assigned to lines of text in a small sample of interviews. Following review of the initial coding by two experienced social scientists (GM and HR), a working analytical framework made up of codes and categories was applied to the remaining transcripts and field notes, while allowing for novel concepts in later transcripts to be coded and categorised. After completion of all coding, the two main authors identified the key categories and further developed them to form the basis of the results section of this paper.

It was not possible or appropriate to involve patients or the public in the design, conduct or reporting of our research. We do intend to involve the public (defined in our case as outbreak and humanitarian actors in the DRC) in the dissemination phase, but this has been put on hold due to the COVID-19 pandemic.

Results

We describe, in order, the process by which CF was produced, the extent to which it was valued by the Ebola response, its uptake for decision-making and its operationalisation as concrete changes to interventions or strategy. These over-arching themes were prioritised as they offer opportunities to highlight barriers and facilitators to the CF process and use, so that recommendations could be generated for its use in future outbreaks.

Production of evidence

Observations showed how the raw feedback was analysed and developed into recommendations for the various consumers of the data, including Ebola response leadership (MoH, WHO and UNICEF) and response partners (including DRC Red Cross, NGOs and additional UN agencies) at multiple levels of the response.

Production of CF

Observations of the CF data collection and analysis process found the fieldwork component to be adhering well to the written operating protocols, which had been changed and adapted over time as the CF mechanism and the Ebola response evolved over the course of the 17 months of the outbreak. Some key changes that had been made to the system included fieldworkers taking on analysis of feedback at their administrative level (to increase the system’s timeliness and sustainability for localised action, and to ensure geographical nuances were not lost), the identification of new thematic codes in response to changes in key operational priorities (ie, perceptions of Ebola survivors) and the institutionalisation of CF meetings with members of the various pillars of the Ebola response to jointly develop recommendations that were then ‘owned’ by the pillar leads for implementation.

The IFRC and DRC Red Cross were not the only group engaged in CF data collection. Other NGOs had different but complementary methodologies, and all formally collected CF was fed into the RCCE pillar. However, proposed approaches to aggregate CF collected via different organisations and methodologies were not welcomed by all actors. Reasons for this were that different approaches to data collection were not felt to be equivalent in terms of the rigour of field worker training and feedback analysis, nor in terms of the geographic coverage or quantity of feedback collected. Organisations were often protective of their own CF approach, and wanted to ensure that ‘their’ feedback was presented as coming from their organisation, perhaps to demonstrate their value to the Ebola response.

Branding of evidence

The Red Cross, while respected for their role in SDB and in community engagement, were initially not perceived to be a data generating organisation by the RCCE pillar of the Ebola response, and as a result, some of the evidence that they were trying to bring to decision-makers was not initially trusted or welcomed by the MoH-led Ebola response coordination.

… people don’t see IFRC as a data organisation, they’re a service organisation, they’re a volunteer organisation … So I think the fact that it’s branded as IFRC data actually affected it and I think once people became familiar that CDC was doing the analysis and were part of the analytic process … and really meeting with the [RCCE pillar] and finally getting [CF] on the agenda of a [RCCE pillar meeting]. (technical advisor, Goma)

The CDC’s reputation for analytical skill supported the Red Cross and the IFRC in further developing their CF system, and imbued the Red Cross with the data legitimacy needed to position CF as critical evidence for decision-making in the Ebola response.

Another challenge with how CF was ‘branded’ was that it was often thought to only have relevance for the RCCE pillar, with the consequence that other pillars (including Infection Prevention and Control, Vaccination, and Case Management) did not always see the applicability of the feedback to their own operations. As stated by one Goma-based individual closely involved in the CF process:

… I think one of the biggest challenges is just that this feedback mechanism is then associated with the [RCCE] pillar, this is like touchy-feely stuff that people don’t care [about]. (community engagement specialist, Goma)

Some respondents also reported that the CF data were sometimes perceived negatively by specific Ebola response pillars:

…they see it as an accusation … Because if we really wanted to manage [CF], or implement recommendations deriving from community feedback, everyone would have to know and accept that there’s a problem, and this is the solution. (RCCE specialist, Beni)

In these ways, both the branding and positionality of the organisation collecting the data, and perceptions of the value of the data itself, affected how the Red Cross CF system was received, and therefore its ability to influence decision-making.

Value of evidence

How different types of evidence were perceived in the Ebola response was found to be dependent on the experiential and academic backgrounds of the consumers of that evidence.

Hierarchies and cultures of evidence

KIIs identified that different types of evidence were viewed as more or less valuable, usable and valid than others in decision-making for the Ebola response. This was often perceived to be linked to the Ebola response leadership, who were predominantly clinicians or epidemiologists and for whom quantitative data sat at the top of a hierarchy of evidence types.

… whether or not we want it or say it, epidemiological data are used much more than social sciences or qualitative data. … we chase numbers when we say “There’s one confirmed case”, or when we say “There are ten confirmed cases”. … Certainly, qualitative data are used, but not as much as the quantitative data … when there is a confirmed case, or this and that happens, that sets a lot of things in motion. But you can be sure that qualitative data are also used every day through the community feedback escalated by the [RCCE pillar]. (area coordinator, Beni).

This quote illustrates the hierarchical approach to evidence types within the response, as well as the perception that qualitative data was not seen to reliably portray the magnitude of the problem or issue that it described. When feedback was presented in a quantitative format (eg, by tabulating the frequency of certain feedback themes), it was better received by the Ebola response coordination leads, as opposed to when quotes from feedback were presented:

… because they’re scientists. They need advanced analyses with probable results. When you present [CF] data for the sake of it … But when you present data that seems to imply an advanced analysis, the work is taken into account. If you say, “The community says this or that” and you stop at that, it’s a bit complicated. But we present recommendations backed by what the community has said, and we consider the rate of repetitions, so it seems more scientific to the people involved in this response. It gets more attention. (RCCE coordinator, Beni)

Respondents felt that a purely qualitative approach to the presentation of feedback findings might be perceived to contain bias and was a confusing way of presenting findings for an audience that had largely been trained in quantitative disciplines. Taking a quantitative lens to qualitative data is not a gold standard approach to the presentation of this type of evidence, and so it took several iterations of presentation formats before a compromise between qualitative and quantitative presentation was reached.
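As a minimal illustration of the ‘compromise’ presentation described above, the sketch below tabulates the frequency of coded themes (the ‘rate of repetitions’) while retaining one verbatim comment per theme. The theme names and comments are invented, and this is not a reconstruction of the actual briefing format used in the response.

```python
from collections import Counter

# Invented, simplified coded feedback items: (theme, verbatim comment).
coded_feedback = [
    ("ipc_burning_of_goods", "Why do the teams burn our mattresses?"),
    ("ipc_burning_of_goods", "Burning belongings makes people hide the sick."),
    ("burial_rites", "We want to see the face of the deceased before burial."),
]

# Quantified view: frequency of each theme (the 'rate of repetitions').
theme_counts = Counter(theme for theme, _ in coded_feedback)

# Compromise presentation: a count per theme plus one illustrative verbatim
# comment, so that the nuance behind the numbers is not lost entirely.
for theme, count in theme_counts.most_common():
    example = next(text for t, text in coded_feedback if t == theme)
    print(f'{theme}: {count} comments | eg "{example}"')
```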

The method by which CF was collected also led some quantitatively trained response staff to dispute how robust the data were: in their view, the feedback was not representative of the community because it had not been collected using a random sample.

I think that community feedback … you can’t go and see the same person every day for feedback. They’re going to repeat the same thing. Or you can’t go and see just one category of people … In my opinion, community feedback needs to establish a randomised selection system for people that give their opinion. For example, we know what Beni’s population is, we know how many homes there are here, and we know how to elaborate a stratified, random sample … in a sequential manner … so that community feedback represents community opinion. And think about how different groups can be included in that. (area coordinator, Beni)

This epistemological difference between the perceived value of qualitative and quantitative data was felt by some research participants to be emblematic of the power differentials between the Ebola response leadership (who were nearly always clinicians and/or epidemiologists) and those with ‘softer’ social science skills. Some respondents felt that, because outbreak responses are nearly always led by clinicians or epidemiologists, the Ebola response inherently became too narrowly focused on biomedical interventions, even in the midst of a highly complex humanitarian context like North Kivu and Ituri. As one communications expert stated:

The response, despite it being a health response, I personally think that … it wouldn’t be adequate to let the WHO direct a health response. I’d rather we chose someone from the social sciences, someone with another profile, to direct a response like this one, a response to an epidemic. Because when it’s directed by someone from social sciences, he could tell the epidemiologist that he’s not doing his job well. But when the epidemiologist runs things, he thinks that he’s the only one who understands reality. (RCCE specialist, Beni)

CF data had to be presented in atypical formats and were downgraded within evidence hierarchies because of their non-statistical approach to sampling. The epistemological perspective of the predominantly clinical or epidemiological Ebola response decision-makers was seen by respondents to accentuate this issue and further limit the value and uptake of CF data.

Decision-makers and evidence uptake

The process of taking in and using evidence in the Ebola response was affected by decision-makers’ time and bandwidth to parse the large volumes of data coming from a range of different sources. Furthermore, the size and organisational structure of the response affected its momentum: changes to policy and strategy required buy-in from a large number of individuals and agencies, as well as extensive retraining of staff on new policies and SOPs, which slowed the pace of change.

Bandwidth of decision-makers to absorb evidence

Amid an Ebola response spread across multiple districts, with more than ten technical pillars and dozens of responding agencies, decision-makers were felt to be incredibly busy, and there was compassion for their workloads and for the challenges they faced in trying to consider data and evidence from a wide variety of sources, including CF. Engaging with routinised systems of CF was quite new for many in the response:

… to be fair to us and all of us who are working this response and to the people who are working in the response, they are not used to having this data in a response, they are not used to having to process CF … so this is a new data stream for epidemiologists and everybody in the response but they’re like, what the heck are we supposed to do with this! … so we’ve had to learn how to make [CF data] meaningful and then trying to balance, how do you make recommendations from the data …(technical advisor, Goma).

In a landscape of highly competing agendas and with an overwhelming number of data points, sources and recommendations, greater ‘community resistance’ to outbreak response interventions was found to guarantee the attention of decision-makers.

If the community complains, for example … you have to wait for the number of complaints to be flagrant enough to get the attention of the decision-makers. That’s an important element, because the more resistance there is, the more the decision-makers pay attention. And you have to wait for the resistance to multiply in order to consider the feedback. (RCCE coordinator, Beni)

In a context with ongoing conflict, where Ebola responders were not infrequent targets of violence, respondents felt that decision-makers were more likely to engage with feedback data when it was negative, but (as illustrated by the above quote) only when a certain threshold of negative feedback had been reached.

Evidentiary inertia

The overall culture of DRC’s Ebola response operated under the assumption that decision-making and the response structure itself were largely driven by evidence. However, the ability of the Ebola response to use evidence with sufficient agility, or indeed at all, was contested by some interviewees, particularly those who had been working in the response for many months. The repetition of old problems in new areas was a source of real frustration to many:

I don’t agree that it’s an evidence-based response because often at times I mean we’re finding … other health zones … that become hot spots are going through the exact same challenges previous ones have and are not applying lessons learned, which is … so important. (technical advisor, Goma)

This was expanded on by another individual, who felt that the perception that the response was evidence-based and community-led was tokenistic, since they felt that the CF was insufficiently acknowledged and acted on:

… it’s been ridiculous that in all documents it says that communities are at the forefront of the response and putting people centre, and it’s just not happening at all … there is more demand now so I think that I see a bit of a shift … and I think this is why this went so slowly, because people would not think that [feedback] is relevant … but even though we are working in such a complex situation where it’s all about security and access, which is often like an argument for people to look at the [CF] data. (technical advisor, Goma)

This perceived sluggishness in accepting the utility and value of the CF data contributed to an overall concern that the Ebola response was not evidence-based in its approach, even when that evidence could have security implications.

Once a particular strategy had been put in place, even if found to be ineffective or if new evidence did not support its continued implementation, it was felt to be very hard to change course:

In my opinion, we have community feedback that indicates we should spend some time on community dialogue before deploying response measures; otherwise, they won’t be as effective as we want them to be. I think that’s something that’s changing very slowly, but it takes time, because the response is like a big ship, and when you want to turn the rudder, by the time you do, and the response shifts, a long time has passed. We have to find a more agile way, a faster one, so that our strategic changes can become operational ones. (programme coordinator, Beni).

While the DRC’s Ebola response was positioned externally as evidence-based in its approach, many respondents felt that this was not the case: lessons were not learnt and applied in new outbreak areas, and this was particularly true for CF data. While lip service was paid to the importance of CF and dialogue, some respondents saw this as tokenistic, particularly when feedback was ignored even in the face of insights into potential security risks for health and humanitarian responders.

Integrating evidence at the operational level

Strategic decision-making was operationalised into action by a variety of different coordination hubs. These actions could be hampered by a number of factors, ranging from the large number of actors involved in the response to a lack of technical know-how and insufficient coordination. However, respondents did identify good examples of CF evidence use which could be built on to develop recommendations for future outbreaks.

Challenges to implementing change

The slow speed of change in the Ebola response was a frequent concern for many respondents, though the reasons for delays in implementing evidence-based change were often beyond the control of any one actor. These were sometimes reported to be related to a lack of resources, as exemplified by challenges associated with developing and reviewing communications materials:

For example, imagine: there’s no leaflet on Ebola [in the response] … Because the Communications pillar and the Coordination don’t have the resources … Because if we had resources, we’d be able to hire experts to provide some interesting support…Because expertise, in Communications is … I wouldn’t say it doesn’t exist, but it’s rare. (communications expert, Beni)

At other times, delays were attributed to the lengthy process of validation to ensure the right actors were involved in making a change. One respondent discussed the challenges with long timelines in generating evidence for action in the context of the RCCE pillar, and the ways in which the bureaucracy inherent to Ebola response decision-making sometimes made it difficult to make operational changes at an appropriate speed:

[The RCCE pillar] validate the data [and recommendation] … So we adopted their recommendation and worked on revising our messages … It took a long time because there was data collection, which was then shared on another level, and that’s when we decided to organize a workshop for revising the messages, because we couldn’t do that ourselves … The government has to approve it. So we set up a workshop, and after it was over, we made drafts, and after those drafts were done, the corrections were done, we’d send them to the Coordination for approval. After that, we started the production process, and its development on the ground. It took us from May to August to finish that process. (RCCE specialist, Beni)

New approaches to operations were often also not well communicated to response field staff, which could act as a direct barrier to the implementation of change:

There are decisions made by the [government body]. They issue memos. But I’ve never seen the coordinators go on the ground to instruct the departments on the memos. Ever. We see the memos in our inbox. Those of us on the ground have to take those memos and meet with the Communications Sub-pillar, and ask them, “Have you seen this?” They never have. So we’re the ones who have to say, “Listen, this is a circular from the [government body]. And now, we have to adapt to it”. (RCCE specialist, Beni)

This issue of communication between strategic and field levels also reflects poor coordination between those levels, requiring the intervention of additional actors, further adding to the challenges of timely implementation.

Strategies for successful uptake of CF data

Respondents discussed ways in which CF data were more likely to result in direct operational changes, and these varied in approach from ad hoc to more strategic engagement with different pillars.

Creating demand for CF data was found to be helpful in gathering the support needed to create recommendations that could be implemented at the field level. The CF was of particular interest to the security pillar, which could then use this information to adjust on-the-ground strategies for response staff deployment:

Regarding the threats, we share this information with the Security Committee. We go to that Committee to tell them, “Look, this is what we received”. And we pay a lot of attention to incidents, because the threats become almost constant. The interesting thing is that this system allows us to see the intensity of the threats. For example, if we get more threats this week, we get concerned and look into what’s happening, and raise an alert … We can’t say it’s 100% reliable … But we think that sharing that information and taking action is always better than doing nothing and facing problems later. (community engagement specialist, Goma)
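The kind of week-over-week monitoring of threat-related feedback described in this quote could, in principle, be supported by a very simple trend check, sketched below. This is a hypothetical illustration only: the counts, threshold factor and alerting rule are invented, and the actual escalation to the security committee relied on analyst and committee judgement rather than any fixed rule.

```python
from statistics import mean

# Invented weekly counts of threat-related feedback for one health zone,
# most recent week last.
weekly_threat_counts = [4, 3, 5, 4, 11]


def should_raise_alert(counts: list[int], factor: float = 2.0) -> bool:
    """Flag the latest week if it is well above the average of earlier weeks.
    The factor of 2 is arbitrary; any real trigger would need local calibration
    and human review, since (as the respondent notes) the signal is not
    100% reliable."""
    *history, latest = counts
    return bool(history) and latest > factor * mean(history)


if should_raise_alert(weekly_threat_counts):
    print("Escalate to the security committee: threat-related feedback is spiking.")
```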

Actions arising from CF could then be developed in concert with the other pillars of the response. Where possible, the cocreation of recommendations between CF actors and pillar leadership was felt to be helpful in getting findings taken up and used, as opposed to simply presenting findings and asking the pillars to develop their own recommendations, as this respondent explained:

We set up these community feedback groups and asked the networks to have everyone, especially the [pillar] president, participate in this group. Because once they participated, together we could view the feedback from the community. We could analyse it together and, on the basis of this analysis, everyone could see there is this or that problem, question, concern regarding specific pillars. … And that would get them to take action, to get involved and think of options, make decisions and ask all the actors on the ground to figure out how to respect the concerns and desires of the population. And that’s how, over the course of the meeting, we could formulate recommendations. For example, regarding the … collection of elements that need to be burnt in [infection prevention and control activities] … We make recommendations to the IPC [pillar]: ‘This method is not well-received by the community … And instead of getting the community involved, it makes people withdraw. So you have to change things in order to get the population involved in our measures (RCCE specialist, Goma)

The success of some IPC recommendation changes was felt by respondents to be linked to the engagement of the pillar lead in the process of generating recommendations, demonstrating the importance of modelling good leadership. Good leadership was also seen in the relative speed of changes in response to CF in the SDB pillar. Respondents attributed this success to the fact that this pillar was co-led by the Red Cross, so they were able to push more effectively for change based on the feedback they were collecting than in other pillars where their influence was more distal. There were several examples of the rapid integration of CF into SDB protocols, such as this one, described by a community engagement expert and Beni local, around the local importance of burial rites:

Despite the fact that there’s an epidemic, the community wishes to preserve the way they honour the dead. To us, honouring the dead means being able to see them … The important thing is that the response, through the SDB teams, was able to let a family member participate in all of the process. That is to say, how the body is wrapped, how it is dressed, how it is cleaned, how it is placed in the body bag. Furthermore, the response has changed the kind of body bag used, by adding a window through which one can see the deceased person’s face, which makes it possible to continue honouring funeral rites here in North Kivu. (programme coordinator, Beni)

Another example of success in changing protocols based on CF in the SDB pillar was explained by an SDB team lead in Beni, when asked about burial of the dead in coffins:

… [provision of a coffin] depends on whether the family asks for it. Because a family may say, “We have our own coffin”. Or “We have our own family grave”. So we go with the family to where they want to bury the dead … it was very hard at the start, because it was hard for the family to have the body. The teams would arrive with coffins, the teams carried the bodies and buried them, and all the family could do was watch. Today, the positive change is that we give back the body after we have rendered it safe. Once the body is [confirmed] negative, we can give it to the family for the burial. We may not be sure where they bury the coffin, but we give the family gloves, so the family can bury it. And we send someone to observe the whole burial process. Because, since the family doesn’t have instructions on how to take off the gloves and all that, or where to put them after the burial, that person will be in charge of collecting all the gloves and [ensuring safe disposal]. We made that change to earn the families’ trust within the community. It was a community suggestion once they too wanted to participate in the burials. (SDB coordinator, Beni).

This respondent felt that it was possible to make these changes quickly because they were hearing from CF, as well as from the SDB teams on the ground, that these changes were being asked for by the community. Hearing similar messages from different sources added weight to the discussions with pillar leadership about changing operational protocols for SDB.

Discussion

This analysis of the Red Cross CF system has allowed for broader lessons to be drawn out relating to evidence production, its value, use and operationalisation in the 2018–2020 North Kivu and Ituri DRC Ebola outbreak. The scale of this CF system is dramatically larger than that of similar systems documented previously, with more data points allowing for improved analyses of trends in feedback as well as granular analyses (‘deep dives’) of issues of particular importance to the response leadership. However, given that the Red Cross CF system was both novel and produced such a large volume of data, the way in which the feedback was considered for decision-making and operationalised into policy change was still being improved in the latter days of the Ebola outbreak in Eastern DRC. Challenges in moving CF from production to utilisation fell into two broad areas: (1) the production and presentation of evidence, and (2) decision-making by policy-makers and the subsequent operationalisation of the evidence.

Overall, the Red Cross CF system adapted well to local response needs and changes, by ensuring that analysis was conducted and shared at the local level (through health zone coordination structures) to support local-level response changes while ensuring that contextual knowledge was not lost, and by developing wider thematic briefs in response to strategic coordination requests. However, challenges arose as a result of the ‘branding’ or positionality of the CF data as being owned and produced by the Red Cross, which likely contributed to delays in the adoption of the feedback data as meaningful evidence in the early days of the Ebola response. As the Red Cross’ reputation as a data-generating organisation grew (with the support of the US CDC), and as CF data were integrated with other social science data, respect for the information also grew. This was also likely linked to strategic changes made at the response coordination level, when the Red Cross and NGOs were brought into strategic coordination in a formalised way (following the establishment of the UN Ebola Mission in May 2019).18 21 IFRC’s strong background in community engagement in outbreaks also likely contributed to the Red Cross CF initially being considered relevant only for the RCCE pillar of the Ebola response, rather than as a source of actionable insights for other pillars’ activities and decision-making. This is unsurprising: operational social science data (such as CF) are often siloed within the RCCE pillar during outbreaks, a practice that has been substantially critiqued.22 Efforts to merge the Red Cross’ system with other systems of CF were perceived to be time-consuming and inappropriate given the different approaches to data collection; moreover, given the uptake and advocacy efforts required to influence decision-making with this novel dataset, merging multiple feedback datasets would have presented a substantial opportunity cost.

Making qualitative CF data more palatable to the Ebola response leadership, who often came from highly quantitative backgrounds, was an additional learning process for those advocating for improved use of the data. Tensions existed between wanting to make the data easy to digest, essentially by ‘quantifying’ it and losing much of the nuance, and the desire to present the data in all its complexity. These tensions reflect longstanding debates in quantitative and qualitative research about the appropriateness of the quantification of qualitative work.23

Qualitative ‘complaint’ data in the securitised North Kivu and Ituri environment (where complaints could forewarn violence) had additional weight in the response, but, according to respondents, only when there were quantifiably enough complaints to reach above a certain threshold. A potential consequence of requiring a substantial number of complaints before action is taken is that response workers and civilians may be put at risk. This links to further debates in the field of outbreak response, where social scientists and anthropologists who focus on listening to CF are only considered to be of use by the response when their work relates to the prevention or lifting of community resistance.24

The fact that the leadership of the security pillar did find the CF so useful is a boon for this novel system, especially given the potentially dire consequences for response staff of not listening to community concerns. This also appears to have been the case among some of the other pillar leads, who saw that their field teams were unable to accomplish their tasks and goals without being accepted by the local community, leading the CF data to become a highly valued source of information. However, this research has suggested that this was not true for all pillars, with some pillar leads failing to participate in feedback meetings, despite encouragement from coordination leadership.

The challenge of transforming evidence into policy and operational action has been extensively documented in humanitarian contexts,6 and it is therefore not surprising that a new data source like CF was not easy to integrate into decision-making in the early days of the Ebola response. However, as the evidence became more trusted and was used to develop recommendations, the slow pace of change in the response, even in the face of evidence suggesting a change was necessary, belied the concept of an evidence-based response. Some Infection Prevention and Control activities, like the burning of people’s goods during Ebola home decontaminations, were long known to be a flashpoint for community anger, anecdotally and through community-based research, and yet it still took many months for policy changes to stop this practice.2 Policy change required engagement and buy-in from a vast number of actors, processes and validation steps. Operationalising any policy change in turn required substantial communication, coordination and training of field staff. Taken together, these processes could take so long to accomplish that communities and response workers often felt that protocols were entirely inflexible, despite all the evidence that a given policy change needed to be made. This ‘evidentiary inertia’, whereby even credible and voluminous evidence is insufficient to drive changes in policy or operations, emanates from the size, structure and complexity of an epidemic response such as that deployed in the North Kivu and Ituri outbreak.

Recommendations

Our foremost recommendation is that CF systems should be considered for deployment in future outbreaks, whether large or small. However, setting up a CF system like the one referenced here can be time-consuming and logistically and financially burdensome if implemented at such a large scale, and may not be suitable for smaller outbreaks or in some contexts. The scope and scale of the CF system should be aligned with the severity of the outbreak and the resources available, with a global discussion of thresholds that would trigger the deployment of a basic system, or the scale-up to a more complex system, as an outbreak progresses. Equipping countries in ‘peacetime’ with a basic package of tools for setting up a CF system would reduce the time lag to roll-out in the event of an emergency.

By normalising CF systems in outbreaks, uptake and use of such data to inform strategic and operational decision-making is likely to increase. These CF systems should be linked to wider social science efforts, from rapid qualitative work and anthropology to other sources of community perceptions data (including knowledge, attitude and practices surveys) that are widely used in outbreak settings.25 These varied sources of data can be used to triangulate against CF data and to create evidence briefs and other knowledge products.

A recommendation for both the Red Cross and other CF systems in general is that, while the Red Cross CF system did not take a sampling approach for logistical and operational reasons, moving in that direction could build more trust in the data, especially among response staff who are less familiar with qualitative methods and approaches. Partially separating the CF system from standard community engagement activities would allow organisations to rapidly scale their CF work up or down, independently of their other activities. The Red Cross, or any other organisation engaged in CF, must have the capacity to conduct robust and rapid analyses of social science data, whether through partnership or by developing this skill in-house.

To address challenges and issues identified in future outbreaks this research offers the following recommendations where CF systems are used:

  • Engage humanitarian organisations that are collecting CF (eg, DRC Red Cross) in strategic and operational coordination structures from early on in outbreak response.

  • Response leaders should be trained in the use of multiple data types. Integrating qualitative data training in epidemiological training programmes (eg, Field Epidemiology Training Programmes) as well as in outbreak response training programmes (eg, WHO’s Incident Management System) would be a first step in this direction.

  • CF collection, analysis and interpretation should be seen as a particular technical skillset and should have clear SOPs so that any actors engaged in this space in outbreaks are able to feed their data into a centralised system.

  • Test different approaches to the presentation of CF data in future outbreaks to create templates that can be easily interpreted by different audiences including response leadership and quantitatively trained (as well as qualitatively trained) staff.

  • Establish CF as a key source of intelligence across outbreak response pillars (not just in RCCE) and ensure tracking systems for recommendations are used and acted on both at the strategic and operational levels. This would therefore likely sit under the broader monitoring, evaluation and accountability function of the overall response.

The Red Cross CF system benefited from an end-to-end learning system, in which adjustments to the approach were made based on ongoing (though informal) process evaluation, and in which data were used weekly to shape messages and train staff. This willingness to adapt and grow an approach over time as new evidence and learning emerge is to be lauded, and should be part of outbreak response culture.

Limitations

The primary research for this piece took place over 2 weeks in two locations of the North Kivu Ebola response: Goma and Beni. Despite sincere efforts, the extreme workload of much of the response leadership in managing multiple flare-ups of Ebola in different geographies meant that it was not always possible to interview staff with higher levels of responsibility for strategic decision-making.

This research focused on the use of CF data by formal coordination structures, and therefore did not specifically seek to document the small, day-to-day changes in response actions made at the field level based on feedback.

As a result of insecurity, it was not possible to include interviews with those providing CF (eg, community members), and this was not within the scope of the study. Due to the COVID-19 pandemic in 2020–2021, planned validation workshops to share the results of this research did not take place.

The research team conducting this study were well positioned due to their previous experience in the DRC and West African Ebola outbreaks. Potential biases may relate to their previous interactions and affiliations with responding NGOs and UN agencies. While the IFRC was the hosting agency for this research, the lead researchers worked to maintain distance from the IFRC through reflexive journaling and frequent discussions of potential bias arising from the relationship with the organisation being studied. Staff conducting the research had previously been involved directly in the response and may therefore, due to a sense of responsibility for the response’s performance, have been positively biased towards it.

Conclusions

When CF is given to the right decision-makers in an outbreak, in a format that they can understand and use to develop clear recommendations, it can be a highly valuable tool for outbreak response. CF data have both instrumental value, insofar as they can be used to improve outbreak response operations, and intrinsic value, in respecting and being accountable to communities.26 However, limited absorptive capacity for new evidence, the loss of contextual information when qualitative data are quantified and the reputation of the organisation presenting the data can make it difficult to get such evidence considered in policy decisions. Furthermore, once feedback evidence is considered and policy is made, slow operationalisation of policy change can leave communities and response staff doubting that there is a functional accountability mechanism and that change is coming.

Data availability statement

No data are available. Due to the sensitive nature of qualitative research and difficulty anonymising participants, these data are not publicly available.

Ethics statements

Patient consent for publication

Ethics approval

This study involves human participants and was approved by the ethics committees of the University of Kinshasa Public Health School (Ref: ESP/CE/264/2019) and the London School of Hygiene and Tropical Medicine (Ref: 17762). Participants gave informed consent to participate in the study before taking part.

Acknowledgments

The authors would like to thank the DRC Red Cross volunteers and staff for all of their efforts in developing, implementing and improving the community feedback system during the incredibly complex 10th DRC Ebola outbreak in North Kivu and Ituri. We would also like to thank Christine Prue, Giulia Earle-Richardson and Vivienne Walz for their review of this article.

References

Supplementary materials

  • Supplementary Data


Footnotes

  • Handling editor Seye Abimbola

  • Contributors FC, HR, GM, OB conceptualised and designed this study. Data collection and acquisition by GM, OB, CAC, EE, LR. GM and HR were responsible for data analysis and interpretation, and drafted the manuscript. All authors contributed critical revisions, approved the final draft and are accountable for the work. Funding was secured by FC, HR, GM, OB. GM is the overall guarantor for this work.

  • Funding This research was funded by the Elrha’s Research for Health In Humanitarian Crises (R2HC) Programme, which aims to improve health outcomes by strengthening the evidence base for public health interventions in humanitarian crises. R2HC is funded by the UK Foreign, Commonwealth & Development Office (FCDO), Wellcome, and the UK National Institute for Health Research (NIHR). GM receives doctoral funding from the Pierre Elliott Trudeau Foundation, Montreal, Canada. HR is a member of the UK Public Health Rapid Support Team which is funded by UK Aid from the Department of Health and Social Care and is jointly run by UK Health Security Agency and the London School of Hygiene & Tropical Medicine.

  • Disclaimer The views expressed in this publication are those of the author(s) and not necessarily those of the Department of Health and Social Care.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.