Graphical illustration using a simple quantitative tool to quantify the value of research
Cost-effectiveness analyses are routinely used to assess whether a programme is expected to improve population health once the health opportunity costs imposed by additional programme spending are accounted for. This assessment can be summarised using an estimate of the net disability-adjusted life years (DALYs) averted by the programme. This measure reflects both the health benefits of the programme and the health forgone because funding the programme leaves resources unavailable for the delivery of other programmes. It is calculated as the DALYs directly averted by the programme minus the DALYs incurred elsewhere in the health system due to the additional programme funding required.
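In symbols (notation introduced here only for clarity), writing ΔH for the DALYs directly averted by the programme, ΔC for the additional programme cost and k for the health opportunity cost of spending, expressed as the cost per DALY forgone elsewhere in the health system:

\[
\text{net DALYs averted} \;=\; \Delta H \;-\; \frac{\Delta C}{k}
\]

A positive value indicates that the programme is expected to improve population health overall once opportunity costs are accounted for; a negative value indicates the opposite.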
In the same way that we can quantify the net DALY impact of investing in healthcare provision, we can also quantify the net DALY impact of investing in research. This idea is the basis for value of information analysis.
To assess the value of a research study or other data collection or evidence gathering activities, we need to understand the types of uncertainty that we could examine in a study with particular endpoints. These endpoints may be epidemiological, clinical, patient-reported, process-related or economic. For example, we might be uncertain about the effectiveness of a drug, the uptake of a rural community-based prevention programme, the quality of life of people with different treatment outcomes, or the cost of implementing a new diagnostic pathway. To assess the value of improving information relating to an endpoint, we need to understand our current level of uncertainty about the endpoint given existing evidence. This uncertainty can be described by a probability distribution showing the likelihood that the endpoint takes different values. This distribution is often called a prior, since it is based on existing knowledge about the specific endpoint. Figure 1B shows the prior on an uncertain endpoint as a histogram.
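As a minimal sketch of how such a prior might be constructed, the snippet below parameterises a beta distribution from a stated mean and standard error by the method of moments (using base R, in which the analyses reported later were conducted). The calculation and variable names are ours, not part of the tool; the mean and SE are taken from the numeric example given later in the text.

```r
# Illustrative sketch in base R (not part of the tool): represent the prior on
# an endpoint bounded between 0 and 1 as a beta distribution, choosing its
# parameters by the method of moments from a stated mean and standard error.
prior_mean <- 0.10  # values taken from the numeric example later in the text
prior_se   <- 0.04

nu    <- prior_mean * (1 - prior_mean) / prior_se^2 - 1
alpha <- prior_mean * nu        # beta shape1
beta  <- (1 - prior_mean) * nu  # beta shape2

# 95% credible interval implied by the prior
qbeta(c(0.025, 0.975), alpha, beta)

# Histogram of draws from the prior (cf. the histogram in figure 1B)
hist(rbeta(1e5, alpha, beta), breaks = 50,
     main = "Prior on the endpoint", xlab = "Endpoint value")
```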
Figure 1 Calculating the net health effects of research. (A) shows net disability-adjusted life years (DALYs) averted by the programme for different values of the endpoint of interest when the programme is expected to be cost-effective based on current evidence; (B and D) show the prior on the uncertain endpoint; (C) shows net DALYs averted by the programme for different values of the endpoint of interest when the programme is not expected to be cost-effective based on current evidence.
Uncertainty about the endpoint alone is not sufficient to justify expenditure on research. For research to deliver value, the uncertainty in the endpoint must translate to uncertainty about whether the programme is cost-effective. For example, we might be highly uncertain about a programme’s effects on clinical outcomes. However, if the programme is cost-effective across the range of plausible clinical outcomes then further research on this endpoint may not deliver value in this setting as it would not change the decision about funding the intervention.
We can assess whether uncertainty in the endpoint is likely to translate to uncertainty about cost-effectiveness by estimating the net DALYs we would expect to avert if the endpoint were found to take the different values reflected in the prior. This is shown in figure 1A. In this illustration, the net DALYs averted increase as the endpoint increases. This reflects estimates of how both DALYs averted and additional costs (or cost savings) change with the value of the endpoint. It also reflects a measure of the health opportunity cost of financing the programme, which allows the additional costs of the programme to be converted into health forgone.
The mean value of the endpoint represents our ‘best guess’ of the value the endpoint takes given currently available information. At this value, the net DALYs averted by the programme are positive and the programme would be considered cost-effective. However, below a certain ‘trigger’ value of the endpoint, the net health effects of the programme become negative, that is, the programme is not cost-effective. The shaded area of the prior histogram (figure 1B) indicates the probability that the endpoint will fall below the trigger point. This is the probability that the intervention will turn out not to be cost-effective and that implementation will reduce population health. However, if we conduct research to improve our understanding of the endpoint, this is also the probability that the research could change the implementation decision. If it is considered implausible that the endpoint could take a value as extreme as the trigger point, then further research will not result in a change in decision and, therefore, based on the available evidence, may not be considered an appropriate use of resources. This emphasises that we should care about uncertainty in endpoints when it leads to uncertainty in decisions.
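A minimal sketch of these two steps is shown below, continuing from the prior constructed above. The linear relationship between the endpoint and net DALYs averted, and its intercept and slope, are purely illustrative placeholders introduced by us; in practice the relationship would come from a cost-effectiveness model or expert elicitation, as discussed later.

```r
# Sketch continuing from the prior above (alpha, beta). The linear relationship
# between the endpoint and net DALYs averted is a purely illustrative
# placeholder; in practice it would come from a model or expert elicitation.
net_dalys <- function(endpoint) -1200 + 16000 * endpoint

# 'Trigger' value of the endpoint at which net DALYs averted cross zero
trigger <- uniroot(net_dalys, interval = c(0, 1))$root

# Probability, under the prior, that the endpoint falls below the trigger,
# that is, the probability that implementation would reduce population health
p_below <- pbeta(trigger, alpha, beta)
c(trigger = trigger, p_below = p_below)
```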
Without additional research, implementation averts DALYs on average, but if low values of the endpoint are realised, implementation reduces population health. If research is conducted and indicates that the endpoint falls below the trigger point (that is, the programme is not cost-effective), then the programme will not be implemented. Research therefore avoids the health losses associated with programme implementation under these conditions, as shown by the grey bars in figure 1A. These bars, therefore, represent the potential health gains from research. The expected net DALYs averted by research are calculated as the health gains (resulting from avoided health losses) when the endpoint takes values below the trigger point (the shaded bars in figure 1A), weighted by the probability of the endpoint taking each value below the trigger point (the shaded bars in figure 1B).
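Writing NB(θ) for the net DALYs averted by implementing the programme when the endpoint takes value θ, and taking expectations over the prior (the notation is ours), this probability-weighted calculation corresponds to the expected value of perfect information about the endpoint:

\[
\text{expected net DALYs averted by research} \;=\; \mathbb{E}_{\theta}\!\big[\max\{\mathrm{NB}(\theta),\,0\}\big] \;-\; \max\!\big\{\mathbb{E}_{\theta}[\mathrm{NB}(\theta)],\,0\big\}
\]

The same expression applies whether or not the programme is expected to be cost-effective under current evidence: in the former case it equals the probability-weighted health losses avoided below the trigger point, and in the latter the probability-weighted gains realised above it.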
Figure 1C, D show how the value of research can be calculated when the programme is not expected to be cost-effective based on current information. Without further research the programme is not implemented and no population health gains are generated. With further research, there is a possibility that the endpoint will take values sufficiently high to support implementation and net DALYs are averted. The possible health gains from research are again shown by the grey bars (figure 1C).
This method shows the value of completely eliminating the uncertainty around the endpoint. Although in reality further research will not resolve all uncertainty, the estimates generated provide an expected upper bound for the population health benefits from research for the setting of interest.
We express the value of the research proposals using two different metrics. The first is the net DALYs averted by using the research to improve decision making. Where a research study is expected to be used in a number of countries, the approach described above can be applied for each country and the net DALYs averted across countries can be calculated. Individual country estimates of the net DALYs averted by research are likely to differ for a range of reasons, including differences in the size of the population that stands to benefit from the research, the costs and health benefits of the programme, and the health opportunity costs of healthcare funds.
The net DALYs averted by research provide an estimate of the expected maximum population health gains from research, accounting for both health gains and programme costs, but they do not consider research costs. Funding a specific research proposal has opportunity costs, which are the health gains that could be generated by using this funding for other research studies.
The second metric is, therefore, the maximum amount a research funder should be willing to spend on the research, given its estimated net health effects. This metric is estimated by multiplying the net DALYs averted by research by a measure of the opportunity cost of research funds. We assume that research funds have a similar level of opportunity cost to funds for service provision. For example, if a research study is expected to avert 1000 DALYs and our measure of opportunity costs indicates that every US$500 of expenditure results in an additional 1 DALY being incurred elsewhere in the health system, then the maximum a research funder should be willing to spend on the research would be US$500 000. If they spend more than this, the health opportunity costs of funding the research would exceed 1000 DALYs and thus more than outweigh the net health gains from research. Given the very different sources of funding that typically underpin service provision and research, the opportunity cost of research funds may differ from the opportunity cost of service funding. We will return to the question of how the opportunity cost of research funds could be estimated in the discussion.
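In symbols (again ours), with k_R denoting the opportunity cost of research funds in US$ per DALY:

\[
\text{maximum justifiable research spend} \;=\; \text{net DALYs averted by research} \times k_{R}
\]

so in the example above, 1000 DALYs multiplied by US$500/DALY gives US$500 000.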
To illustrate the approach, we use a numeric example where we are interested in an outcomes endpoint that can, in principle, take values between 0 and 1 (eg, the probability of treatment response). Our existing knowledge of the endpoint indicates it is expected to take a value of 0.10 (SE 0.04, 95% CI 0.04 to 0.19), which allows us to define its prior (we apply a beta distribution here). In a second step, we make use of existing information about how different values of the endpoint influence the health effects and costs of the programme. In the present example, we know that if the endpoint takes the average value, the programme is expected to avert 2000 DALYs. If the endpoint takes the value at the lower bound of the CI, the programme is expected to avert 1000 DALYs, whereas if the endpoint takes the value at the upper bound of the CI, the programme is expected to avert 3000 DALYs. The expected additional long-term cost associated with the programme is US$450 000 and is not expected to vary with the endpoint. Lastly, we evaluate the health opportunity cost associated with funding the intervention. This is 1500 DALYs, based on additional costs of US$450 000 and an estimate of health opportunity cost of US$300/DALY. This information about the DALYs averted at different values of the endpoint, and about opportunity costs, allows us to estimate the net DALYs averted at different values of the endpoint. We provide a simple Microsoft Excel tool to allow users to review the numeric example and apply the approach to their own contexts. This tool is available in the online supplementary material; for the most up-to-date version of the tool, see https://www.york.ac.uk/che/research/global-health/methods-guidelines/%23tab-4. The tool provides a graphical summary of the prior information and of the relationship between net health effects and the endpoint of interest, as shown in figure 2.
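For readers who prefer code to the spreadsheet, the inputs of this numeric example can be written down directly. The following sketch is our own illustration in base R (reusing alpha and beta from the earlier prior sketch, which used the same mean of 0.10 and SE of 0.04); it mirrors the structure of the tool's inputs but is not the tool itself.

```r
# Inputs of the numeric example (our own sketch, reusing alpha and beta from
# the earlier prior sketch with mean 0.10 and SE 0.04).
inputs <- data.frame(
  endpoint      = c(0.04, 0.10, 0.19),   # lower CI bound, mean, upper CI bound
  dalys_averted = c(1000, 2000, 3000)    # DALYs averted at each endpoint value
)
add_cost          <- 450000  # additional long-term programme cost (US$)
opp_cost_per_daly <- 300     # health opportunity cost (US$ per DALY)
add_cost / opp_cost_per_daly # health opportunity cost of funding: 1500 DALYs
```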
Figure 2 Output of the quantitative Excel tool for calculating the net health effects of research. DALYs, disability-adjusted life years.
The tool uses regression methods to generate estimates of the net health effects of a programme at all plausible values of the endpoint. The regression uses estimates of DALYs averted and additional costs at different values of the endpoint that are entered by the user. Two regressions are then fitted: one regressing DALYs averted on the endpoint and the other regressing additional costs on the endpoint. Options are available to use linear regression or to assume a range of non-linear relationships between the endpoint and DALYs averted or additional costs.
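A minimal sketch of this regression step, using the numeric example's inputs defined above, is shown below. The function and object names (fit_dalys, predict_net_dalys, inputs) are ours, and the tool's own implementation and regression options may differ.

```r
# Sketch of the regression step using the numeric example's inputs
# (the tool's own implementation and options may differ).
fit_dalys <- lm(dalys_averted ~ endpoint, data = inputs)

# In this example additional costs do not vary with the endpoint, so no cost
# regression is needed; with varying costs one would fit an analogous model.
predict_net_dalys <- function(endpoint) {
  dalys <- predict(fit_dalys, newdata = data.frame(endpoint = endpoint))
  dalys - add_cost / opp_cost_per_daly
}

# A non-linear relationship could be allowed for by transforming the endpoint,
# eg, lm(dalys_averted ~ endpoint + I(endpoint^2), data = inputs)
```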
The tool uses the data entered to generate estimates of the benefits of research. The tool shows the implications of making decisions based on current evidence, and the potential benefits of making decisions on the basis of further research, as shown in figure 2. Without further research, we can only base our decision on what we expect to occur. We expect that the programme averts 1868 DALYs, with a health opportunity cost of 1500 DALYs, that is, 368 net DALYs averted (the expected health benefit of 1868 DALYs is not identical to the 2000 DALYs averted at the mean value of the endpoint because the beta distribution used to describe the endpoint is not symmetrical). On the basis of current evidence, we therefore implement the programme. If we conduct research, we will gain more information about which value the endpoint takes. If the endpoint is as expected or higher, there is no change to the decision. If the endpoint is lower than the trigger point of 0.07, the net DALYs averted become negative and we choose not to implement the intervention. Weighting the net DALYs averted by avoiding implementation by the probability of observing values of the endpoint below 0.07, we expect the research to avert 59 DALYs. If the research is only considered relevant in this context, then the maximum a research funder should be willing to spend on it is US$17 800, suggesting that this may not be a high-priority area for research. If the research is expected to inform decision making in other countries, then the process can be repeated for each country and the value of research across countries can be calculated.
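The calculation described in this paragraph can be sketched as a Monte Carlo simulation over the prior, as below. This is our own illustration, built on the objects defined in the earlier sketches (alpha, beta, predict_net_dalys, opp_cost_per_daly); because the tool fits its own regressions, the sketch need not reproduce the tool's exact figures, so the published values are quoted in the comments for comparison.

```r
# Sketch of the value-of-research calculation for the numeric example, by
# Monte Carlo simulation over the prior. Results need not exactly match the
# figures produced by the Excel tool, which fits its own regressions.
set.seed(1)
theta <- rbeta(1e6, alpha, beta)     # draws from the prior on the endpoint
nb    <- predict_net_dalys(theta)    # net DALYs averted at each draw

# Decision on current evidence: implement only if the expected net DALYs
# averted are positive (the text reports 368 net DALYs averted, so implement)
nb_current <- max(mean(nb), 0)

# Decision informed by (perfect) research: implement only for endpoint values
# at which net DALYs averted are positive
nb_research <- mean(pmax(nb, 0))

# Expected net DALYs averted by the research itself (the text reports
# 59 DALYs) and the maximum a funder should be willing to spend on it,
# assuming research funds have the same opportunity cost as service funds
# (US$300/DALY); the text reports US$17 800
evpi_dalys <- nb_research - nb_current
max_spend  <- evpi_dalys * opp_cost_per_daly
```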
Guidance for gathering evidence to inform estimates of the value of research
As shown above, a necessary part of any assessment of the value of research is formulating a view on the current level of uncertainty about the endpoints the research will examine. This uncertainty can be represented as a prior distribution. Evidence from existing studies, including pilot studies or systematic reviews, can be used to formulate priors. In practice, however, many research studies examine combinations of interventions and contexts that have not previously been studied. When evaluating a specific research proposal, formally elicited expert opinion11 12 may therefore be a valuable complement to quantitative and qualitative information when formulating priors.
It is also necessary to estimate how the health benefits and additional costs of the programme change with the endpoint. Where a cost-effectiveness model is available, this can be obtained by conducting one-way sensitivity analysis, that is, varying the values taken by the endpoint of interest and recording the corresponding variations in health benefits and additional long-term costs associated with the intervention. If a cost-effectiveness model is not available for the context of interest, or existing models cannot be easily adapted, then formal expert elicitation can be used to quantify the magnitude of health benefits and additional costs at different levels of the endpoint.
In order to estimate the net health effects of programmes, we require an understanding of how additional programme costs translate to health opportunity costs. Recent work has estimated the opportunity cost of domestic healthcare spending in a wide range of LMICs.13 Where programmes are funded via overseas aid the opportunity costs of this funding will depend on the remit of the funder. An understanding of the potential health opportunity cost of an overseas aid funding stream can be garnered by reviewing the cost-effectiveness of those interventions that are and are not currently funded, and potentially developing a cost-effectiveness league table of funded programmes.
Specification of each element described above is likely to require judgements regarding which evidence is relevant and how to use that evidence. By using the tool provided, users can explore the sensitivity of their results to each of these elements. In some contexts, the time-sensitive nature of a research-funding decision, analyst capacity or funding availability may make it infeasible to assemble these types of evidence. In these contexts, the tool can provide a quantitative basis for testing how different assumptions influence both the net DALYs averted by the research and the maximum amount a funder should be willing to spend on the research.
We now show how the approach can be applied to a specific example. In this example, evidence is available from a cost-effectiveness model but no probabilistic sensitivity analysis has been conducted, thus precluding the use of standard value of information methods.
Self-testing example using the HIV synthesis model
We show how these methods can be applied to assess the value of research in HIV self-testing programmes in Malawi. Self-testing programmes have been the subject of a number of recently published and ongoing research studies in sub-Saharan Africa (for some examples see refs. 14–18). We use the HIV synthesis model19 20 which has been used to assess the cost-effectiveness of a range of HIV prevention and treatment investments in different settings. The self-testing programme under evaluation is not currently part of the HIV investment strategy. We assess two possible scenarios to estimate the population health benefits from research studies on self-testing programmes. Under the first scenario, no research is conducted and investment in self-testing is based on current evidence about the costs and benefits of the programme. Under the second scenario, research is commissioned and the results of the research inform the decision about investment in self-testing.
Studies of HIV testing have included a range of endpoints measuring intervention effectiveness and costs at different points in the cascade of care. Frequently reported endpoints include coverage and uptake, HIV positivity, linkage and retention in care, and programme costs.18 The cost-effectiveness of self-testing is strongly linked to the cost per new HIV diagnosis,21 which is calculated as the programme cost per person divided by the proportion of people diagnosed with HIV as a result of the programme. This suggests that two endpoints, programme costs and the proportion of people diagnosed with HIV, are likely to be important determinants of whether testing is cost-effective and are therefore important targets for further research. The proportion of those targeted for testing who are diagnosed with HIV within facility-based care reflects the combined effect of multiple endpoints collected within testing studies, such as uptake, HIV positivity among those tested and linkage to facility-based care. We, therefore, examine a cost study focused on the cost of the self-testing programme per individual eligible for testing, and an outcomes study estimating the proportion of the eligible population who are diagnosed with HIV in facility-based care.
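Restating the definition above as a formula, and noting that the proportion diagnosed can be viewed, approximately, as the product of the sequential proportions collected in testing studies:

\[
\text{cost per new HIV diagnosis} \;=\; \frac{\text{programme cost per person targeted}}{\text{proportion of those targeted diagnosed with HIV}},
\qquad
\text{proportion diagnosed} \;\approx\; \text{uptake} \times \text{positivity} \times \text{linkage}
\]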
To evaluate the research proposals, we require priors describing the uncertainty about both programme costs and the proportion of the eligible population who are diagnosed with HIV in facility-based care. These priors will depend on the characteristics of the target population and implementation setting, the details of the testing programme such as whether measures to enhance linkage are proposed (eg, financial incentives, community-based support) and other contextual factors. The priors will, therefore, depend on the exact details of a specific research proposal and are most likely best formulated by combining available data, qualitative information and expert opinion. For the purposes of this demonstration, we use only data from the literature to inform the priors. We use data from a systematic review and meta-analysis,18 focusing on those data relating to self-testing. This work reflects the fairly limited data on self-testing available in 2015, when many of the self-testing studies were designed. For further details see online supplementary material S1.
Estimates of the additional costs and DALYs averted by a self-testing programme were derived from the HIV synthesis model. This is an individual-based stochastic model of heterosexual transmission, progression and treatment of HIV infection. We used outputs from the model generated by the ‘Working group on cost effectiveness of HIV testing in low income settings in sub-Saharan Africa’,21 which examined the effects of expanding HIV testing beyond a core testing programme considered to represent the current standard of care in many countries. This core testing programme included testing for pregnant women, symptomatic individuals, female sex workers (although this is not fully implemented in many countries) and men coming forward for circumcision. This work examined the relationship between cost per HIV diagnosis and long-term cost-effectiveness. The demographics of the population and the features of the HIV epidemic were based on those for Malawi, and the model is calibrated to data that are representative of this setting. This work examined the cost-effectiveness of testing for a wide range of scenarios. The scenarios reflect variation in the expanded testing programme's testing rates, how well the programme targets HIV-positive individuals and the cost per test. The scenarios also reflect uncertainty about the context in which the programme is implemented in terms of the nature of the epidemic, ART programme characteristics and the core testing programme. The model time horizon was 50 years and a discount rate of 3% was applied to costs and outcomes.
We used the scenario analysis outputs from the model to estimate how costs and DALYs averted vary with each of the two endpoints of interest (the proportion of the targeted population diagnosed with HIV in facility-based care, and programme costs). For further details, see online supplementary material S2.
Estimating the net DALYs averted by self-testing requires a measure of the health opportunity cost of the funds used to pay for self-testing. We have used a measure of opportunity cost of US$500/DALY. This represents the cost per DALY averted of those services we expect to be displaced by investments in self-testing. US$500/DALY is considered a relevant cost-effectiveness threshold for resource allocation within the HIV programme, which is overwhelmingly reliant on overseas aid.21 22 Additionally, HIV investments which Malawi and other countries in sub-Saharan Africa have struggled to scale up often have incremental cost-effectiveness ratios (ICERs) around US$500/DALY, and HIV budgets have been shown to be exhausted in South Africa after funding interventions with ICERs around US$500/DALY.23 Where delivery of HIV interventions draws on resources that would otherwise be used for non-HIV health activities, a lower threshold is more appropriate; we return to this in the discussion.
The analysis of the outputs from the HIV synthesis model was conducted in the statistical software R and associated packages.24–39