Keywords
Human-Centered Design, User-centered Design, Design Thinking, Evaluation, Adolescents, Contraception, Africa
In the field of public health, human-centred design (HCD), also known as design thinking, is increasingly being used during intervention development, and sometimes implementation, to design innovative solutions to complex problems1. HCD is a creative approach to programme design that focuses on building empathy with the population of interest, and on generating and iteratively testing many potential solutions to complex problems. The approach has a high tolerance for both ambiguity and failure1–3. HCD shares some characteristics with other methods used to design programmes, such as traditional socio-behavioural research, but tends to be more ‘nimble/flexible/iterative’ and less protocol-driven4.
Intervention evaluation is an essential component of public health research and programming, and detailed guidance on evaluation approaches and methodologies exists5,6. However, the iterative and flexible nature of HCD potentially poses some unique challenges for evaluation1,3,7, and examples of evaluations of HCD public health interventions have so far been limited7,8. There are some parallels between evaluations of HCD interventions and evaluations of interventions incorporating adaptive management or quality improvement, where the intervention can also change over time9–14. Programmatic interventions, as opposed to researcher-led interventions, also often involve an element of flexibility in implementation, as implementers strive to develop more context-appropriate interventions and respond to programmatic targets. In such interventions, the location and timing of implementation are guided primarily by programme targets or local government priorities, and less so by a need to maximise the quality of evidence of impact15,16. Despite this overlap with other evaluation scenarios, we have identified some challenges unique to the evaluation of HCD-led interventions.
In this letter we discuss the evaluation opportunities and challenges identified during the ongoing A360 evaluation and make recommendations for future evaluations of programmes designed using HCD.
The intervention under evaluation is Adolescents 360 (A360), a multi-country initiative which aims to increase the voluntary use of modern contraceptives among adolescent girls aged 15–19 years. The A360 initiative is led by Population Services International (PSI) in collaboration with the Society for Family Health (SFH) in Nigeria, design partner IDEO.org, and the Centre for the Developing Adolescent at the University of California. A360 is co-funded by the Bill & Melinda Gates Foundation and the Children’s Investment Fund Foundation.
A360 set out to take a transdisciplinary approach to intervention development, in which the different discipline ‘lenses’ work jointly to create innovations that integrate, and move beyond, discipline-specific approaches to address a common problem (Figure 1).
A360 aims to create context-specific, youth-driven solutions that respond to the needs of young people. Through an iterative design process, unique solutions were created in each of the A360 countries. In Ethiopia, Smart Start uses financial planning as an entry point to discuss contraception with newly married couples17. In southern Nigeria, 9ja Girls provides physical safe spaces (branded spaces in public health clinics) for unmarried girls18, and in northern Nigeria, Matasa Matan Arewa targets married girls and their husbands with counselling and contraceptive services, using maternal and child health as an entry point19. In Tanzania, ‘Kuwa Mjanja’ (‘be smart’) delivers life and entrepreneurial skills training alongside opt-out counselling sessions and on-site contraceptive service provision to both married and unmarried adolescent girls20.
The independent external evaluation aims to understand whether the A360 approach leads to improved reproductive health outcomes; generate evidence about A360’s cost-effectiveness in relation to other programming approaches; and capture how A360 is implemented in different contexts, and the experience of the participating young people and communities. It has four components (Figure 2)21:
1. Outcome evaluation, which has a quasi-experimental design with before-after cross-sectional surveys in all four settings (Oromia, Ethiopia; Mwanza, Tanzania; Ogun and Nasarawa, Nigeria) and a comparison group in the two Nigerian settings22 (the sketch after this list illustrates the underlying analytical logic).
2. Process evaluation incorporating both traditional qualitative methodologies and youth participatory research23.
3. Costing study to understand the drivers of cost, together with a more extensive cost-effectiveness study within the outcome evaluation study geographies.
4. Engagement and research uptake.
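To make the analytical logic of the quasi-experimental outcome evaluation concrete, the following is a minimal sketch of a difference-in-differences style estimate of the kind that a before-after design with a comparison group supports. All numbers are hypothetical, not A360 results; the actual analysis is specified in the published outcome evaluation protocol22.

```python
# Minimal sketch of a difference-in-differences estimate, as supported by a
# before-after design with a comparison group. All values are hypothetical.

# Proportion of adolescent girls using a modern contraceptive method,
# from before-after cross-sectional surveys (illustrative values).
intervention_before = 0.18
intervention_after = 0.27
comparison_before = 0.17
comparison_after = 0.20

# Change in each arm over the study period.
intervention_change = intervention_after - intervention_before  # 0.09
comparison_change = comparison_after - comparison_before        # 0.03

# The difference-in-differences estimate attributes to the intervention the
# change over and above the secular trend observed in the comparison arm.
did_estimate = intervention_change - comparison_change
print(f"Difference-in-differences estimate: {did_estimate:.2f}")  # 0.06
```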
HCD takes a phased approach to intervention development. In the ‘inspiration’ phase the designers immerse themselves in the worlds of girls and their influencers to understand the issues. In the ‘ideation’ phase they attempt to make sense of what has been learnt, identify opportunities for design, and conduct an iterative cycle of prototyping possible solutions and strategies. The most promising prototypes are then piloted. In the case of A360, implementation at scale involved a further ‘optimisation’ period in which the selected solutions were further modified to maximise scalability and affordability. In A360, the inspiration and ideation phases took place over approximately 12 months (September 2016 to August 2017), with piloting for a further three to four months (September to December 2017). Optimisation and scale-up started towards the end of 2017/early 2018. Among other things, this lengthy intervention design period presented challenges for the evaluation, which we explore in the sections that follow.
The outcome evaluation aimed to measure the impact of A360 on the voluntary use of modern contraceptives among adolescent girls aged 15–19 years in the study geographies. The outcome evaluation was designed while the implementers were in the ‘inspiration’ phase, with final study protocols submitted for ethical approval while prototyping was still ongoing and before the intervention had been finalised (Figure 3). This timeline was necessary in order to conduct baseline surveys in the four study settings in 2017, prior to scale-up of the finalised interventions. The implication of designing the evaluation concurrently with intervention development was that key pieces of information about the intervention components and the theory of change were unclear and changed over the protocol development period. Detailed information on the intervention was needed to inform the setting for the baseline surveys and the kinds of questions to be asked. Because the outcome evaluation study design was finalised before the interventions themselves, there remained a risk that the evaluation would not be optimal.
A key consideration for the baseline survey design was who would be targeted by the A360 intervention, in terms of geographical location and demographic characteristics. Early in the intervention design process it became clear that in some locations A360 would target only married, or only unmarried, adolescent girls, though uncertainty remained through the early phases of the design process. For example, in northern Nigeria the outcome evaluation focused on married adolescent girls because the initial intention was to target only those who were married; however, 9ja Girls, the intervention designed in southern Nigeria for unmarried adolescent girls, was eventually also implemented in northern Nigeria. In Ethiopia, initial HCD insights pointed towards a focus on unmarried adolescents in school, but other programming pressures, including donor preferences, led to a later shift in focus to married rural women. The outcome evaluation was designed to target all married 15–19-year-old girls, but the final intervention focused on ‘newly married’ adolescent girls. While some change in target population might be expected with non-HCD programmatic interventions, the HCD process gave the implementers more freedom and confidence to make additional changes. It was also challenging to determine which geographies would receive A360, as the implementers were in the middle of an intense phase of solution prototyping and were not yet thinking about larger-scale implementation.
The delivery setting of the intervention was also relevant for the study design: would A360 be implemented at health facilities, in schools, or in the community? Given the uncertainty as to how A360 would be implemented, we opted for a community-based survey, which turned out to be appropriate for the final A360 interventions. However, if A360 had been implemented primarily in health facilities or schools, then evaluating the intervention at the broader community level might not have been the most efficient study design.
The uncertainties in key study design parameters meant that we had to develop multiple study design scenarios, which were repeatedly revised as new information came to light. For example, the intervention was initially to have distinct implementation phases as it was rolled out across Nigeria, and we explored the idea of conducting a stepped-wedge trial (an illustrative sketch of this design follows below). Following further discussions with the implementers, it became clear that there was no certainty that roll-out would be phased, as the implementers did not yet know what the intervention would comprise. In Tanzania, we explored whether a regression discontinuity design might be possible if the intervention were implemented in one or more of the existing demographic surveillance sites; however, given the uncertainty around the nature of the intervention, the implementers were reluctant to commit to implementing in those areas. This prolonged study design process entailed added costs and was at times frustrating for all involved.
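For readers less familiar with the designs considered, the following sketch illustrates the structure of a stepped-wedge rollout, in which clusters cross from control to intervention at staggered steps. The cluster counts and schedule are hypothetical and do not describe the A360 rollout.

```python
# Illustrative stepped-wedge design matrix: clusters (e.g. local government
# areas) cross from control (0) to intervention (1) at staggered steps.
# This is a generic sketch of the design considered, not the A360 rollout.

n_clusters, n_steps = 4, 5  # hypothetical numbers

schedule = [
    [1 if step >= cluster + 1 else 0 for step in range(n_steps)]
    for cluster in range(n_clusters)
]

for cluster, row in enumerate(schedule, start=1):
    print(f"Cluster {cluster}: {row}")
# Cluster 1: [0, 1, 1, 1, 1]
# Cluster 2: [0, 0, 1, 1, 1]
# Cluster 3: [0, 0, 0, 1, 1]
# Cluster 4: [0, 0, 0, 0, 1]

# Every cluster eventually receives the intervention, but the staggered
# crossover points only exist if roll-out is genuinely phased - the
# uncertainty that ultimately ruled this design out for A360.
```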
The process evaluation was aligned with the HCD-driven phases of the A360 design process and had the primary objective of presenting a descriptive and analytical account of how the implementation of A360 has played out in relation to its Theory of Change23. The process evaluation attempted to understand both the intervention development process and intervention implementation. During intervention development the process evaluation team faced challenges, as the fast-paced, highly iterative HCD process meant that the decision-making process, the ‘design energy’, often went undocumented4. This challenge has also been noted in another evaluation of an HCD design process8. As a result, the process evaluation team needed to be flexible in order to align closely with the implementers’ work plan, and methodologies such as direct participant observation were key to capturing the depth of the HCD process.
The potential for research fatigue was observed among target community members as their views were solicited by both the implementers designing the intervention and the process evaluation team interested in understanding the HCD process from the participants’ point of view. The process evaluation team, therefore, needed to balance the importance of capturing the views of community members with the potential for research fatigue.
During the design of the process evaluation, the intention was that the findings would feed into and inform the intervention design at key moments. However, there was limited uptake of process evaluation findings by implementers. For example, the process evaluation highlighted the need for the programme to do more to address broader community and social norms, but this finding had limited impact on the intervention design. Poor uptake of findings was partly a result of the fast-paced nature of A360 and the resultant demands on the implementers’ time, which left them insufficient time to pause and reflect on the process evaluation findings. Country implementing teams differed in how they engaged with external recommendations: some teams were receptive and willing to listen and adjust, while others were more protective of their ‘solutions’. Designers add value to public health because they take a different approach to addressing an issue; however, in the case of A360, a preference for working from a blank slate, free of the constraints of public health evidence, may have limited the implementers’ ability to respond to the process evaluation findings. Uptake of the process evaluation findings improved when the evaluation team introduced participatory action research activities which focused on questions that were important to the country teams (e.g. health care provider attitudes in Ethiopia24 and Nigeria25).
The cost-effectiveness study faced challenges similar to those of the outcome evaluation, with complicated and delayed development of the study protocols. Although all costing exercises face some unknowns about the intervention to be costed, the HCD process added a layer of uncertainty. Furthermore, because proponents hypothesised that the design process itself would be the key factor in producing an effective (and cost-effective) intervention, an important challenge was both measuring the total design cost and isolating the cost of HCD specifically, as the HCD activities were tightly intertwined with the other A360 ‘lenses’. In addition, because A360 was creating a new trans-disciplinary approach, there was also interest in separating the costs associated with the ‘creation’ of the A360 approach from the costs associated with implementing and scaling A360 in countries. Interviews were held with intervention staff to get a sense of the distribution of activities (and associated costs) across the study ‘lenses’ and between efforts to create versus implement A360 (a simple illustration of this apportionment follows below). The implementing agencies planned continuous changes or tweaks to the intervention once it was up and running, so more frequent rounds of cost data collection were required during the scale-up period to capture how those changes might affect the ‘production process’ and associated costs.
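As a simple illustration of the apportionment logic described above, the following sketch allocates a shared design cost across discipline ‘lenses’ using activity shares of the kind elicited in staff interviews. All figures, and the lens names, are hypothetical; the costing study’s actual methods are described in its protocol.

```python
# Minimal sketch of apportioning a shared design cost across discipline
# 'lenses' using activity shares estimated from staff interviews.
# All figures and lens names below are hypothetical.

total_design_cost = 1_000_000  # hypothetical total design-phase cost (USD)

# Hypothetical distribution of staff effort across lenses, from interviews.
effort_shares = {
    "human-centred design": 0.45,
    "developmental science": 0.20,
    "public health": 0.25,
    "youth engagement": 0.10,
}

# Shares elicited from interviews should account for all design effort.
assert abs(sum(effort_shares.values()) - 1.0) < 1e-9

for lens, share in effort_shares.items():
    print(f"{lens}: ${total_design_cost * share:,.0f}")

# Isolating the HCD-specific cost then amounts to reading off the
# 'human-centred design' line, while acknowledging that tightly
# intertwined activities make any such shares approximate.
```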
Robust evaluation of a new and promising approach such as HCD is warranted yet challenging. Some of the challenges faced are not unique to interventions designed with HCD and will be recognisable to those who have led evaluations of programmatic interventions. In comparison to researcher-led studies, evaluation of programmatic interventions is associated with a reduced level of control and increased uncertainty. A challenge for evaluators is to find ways to deal with this uncertainty while retaining scientific rigour.
We have the following recommendations for others who would like to evaluate programmes incorporating HCD:
1. Implementers should allow adequate time to participate in the process evaluation, as well as work with the process evaluation team to ensure that findings are timed to feed into key decisions.
2. The process evaluation team should maximise secondary analysis of data collected by the implementers, and joint data collection could be considered where additional data collection is needed and participant research fatigue is anticipated.
3. Like HCD itself, the process evaluation approach needs to be iterative and adaptive. In A360, the process evaluation paused after the pilot phase, and our team worked with the A360 implementers and donors to develop evaluation questions that reflected the solutions being developed and the iterated A360 Theory of Change.
4. For the process evaluation, methodologies such as direct observation are instrumental in capturing the fast-paced, highly iterative HCD process, and in understanding the ‘design energy’, i.e. how decisions were made at key points in the design process.
5. To avoid a time-consuming and resource-intensive design process, future HCD-based initiatives should consider a phased evaluation approach:
• Conduct process evaluation during the HCD inspiration, ideation, and pilot phases.
• Wait until the implementers have a better understanding of the emerging programme and have finalised the target geographies, target population, and intended outcomes before planning an outcome evaluation and cost-effectiveness study.
• During the implementation phase, conduct a comprehensive process evaluation that can capture whether, how, and why the intervention changed during implementation.
The advantages of a phased approach need to be balanced against the disadvantage of lengthening the time between implementation and the availability of evaluation findings.
No data are associated with this article.
This study was supported by the Bill & Melinda Gates Foundation (OPP1134172) and the Children’s Investment Fund Foundation.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
We would like to thank Anne LaFond for helpful comments on an earlier draft of this letter.