Abstract
Background Co-creation is seen as a way to ensure all relevant needs and perspectives are included and to increase an intervention's potential for beneficial effects and uptake; process evaluation is crucial to support this. However, existing process evaluation frameworks have been built on practice characterised by interventions developed and implemented top-down and may be limited in capturing essential elements of co-creation. This study provides a review of studies planning and/or conducting a process evaluation of public health interventions adopting a co-creation approach, and aims to derive the process evaluation components assessed, the frameworks used and insights into formative and/or participatory evaluation.
Methods We searched for studies on Scopus and the Health CASCADE Co-Creation Database. Co-authors performed a concept-mapping exercise to create a set of overarching dimensions for clustering the identified process evaluation components.
Results 54 studies were included. Conceptualisations of process evaluation in the included studies concerned intervention implementation, outcome evaluation, mechanisms of impact, context and the co-creation process. 22 studies (40%) referenced ten existing process evaluation or evaluation frameworks; the most referenced were the frameworks developed by Moore et al (14%), Saunders et al (5%), Steckler and Linnan (5%) and Nielsen and Randall (5%).
38 process evaluation components were identified, with a focus on participation (48%), context (40%), the experience of co-creators (29%), impact (29%), satisfaction (25%) and fidelity (24%).
13 studies (24%) conducted formative evaluation, 37 (68%) conducted summative evaluation and 2 studies (3%) conducted participatory evaluation.
Conclusion The broad spectrum of process evaluation components addressed in co-creation studies, covering both the evaluation of the co-creation process and the intervention implementation, highlights the need for a process evaluation tailored to co-creation studies. This work provides an overview of process evaluation components clustered into dimensions, together with reflections, which researchers and practitioners can use to plan a process evaluation of a co-creation process and intervention.
- Public Health
- Review
- Health systems evaluation
Data availability statement
All data relevant to the study are included in the article or uploaded as online supplemental information.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
WHAT IS ALREADY KNOWN ON THIS TOPIC
There is a growing recognition of the value of process evaluation.
The absence of process evaluation frameworks built to suit the context of co-creation makes it unclear whether they are adequate for this specific context.
WHAT THIS STUDY ADDS
The results demonstrate a fragmented interpretation of process evaluation in the context of co-creation.
Most assessed process evaluation components relate to participation, context, experience of co-creators, impact, satisfaction and fidelity.
The majority of studies do not reference existing process evaluation frameworks, with the UK Medical Research Council Guidance being the most referenced framework.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
The study highlights the need to enhance existing process evaluation frameworks with additional characteristics and components relevant to co-creation.
The study suggests considering both the co-creation process and intervention implementation as interventions and conducting process evaluations for each.
The study recommends the use of formative evaluation.
Introduction
Co-creation is advocated as a means to develop solutions (e.g., an intervention to improve public health) which meet the needs and wishes of the population of interest and other relevant stakeholders, by embracing a collaborative approach of innovative problem-solving. This approach includes the involvement of a wide range of stakeholders throughout all phases of a project,1 from identifying or defining the problem to the project’s concluding stages2 to co-create effective and sustainable solutions that align with the needs and preferences of all relevant stakeholders.3 It has been considered a promising approach to increase the effectiveness and impact of public health interventions and to contribute to the closing of the implementation gap,4 particularly valuable in the context of marginalised communities.4 5
However, co-creation risks tokenistic and ineffective applications without a rigorous methodology.
Process evaluation especially has been regarded as crucial to contextualise, explain and increase the science behind public health interventions.6 Its understanding has evolved over time. In its early stage, it primarily involved the assessment of implementation through the analysis of quantitative process indicators for interpreting the results obtained from effectiveness studies. Later, there was increased recognition of the need for qualitative research alongside trials to place greater value on the context, acceptability of an intervention and implementation issues.7 This understanding of process evaluation is exemplified by the framework of Saunders et al,8 which focuses on capturing intervention implementation aspects, such as fidelity to the protocol, the number of intervention activities implemented and topics intended to be covered, attendance rates, recruitment procedures and contextual factors that may have affected the intervention implementation.
Since 2010, process evaluation has expanded its scope to include the exploration of mechanisms of impact. For instance, through the British Medical Research Council (MRC) guidance,9 authors propose understanding process evaluation not only as a way to report on intervention implementation but also as an opportunity to explore elements that may help to explain how a certain impact has been achieved. Process evaluation is described by the MRC guidance and recent studies as a way to assess fidelity and quality of implementation, clarify causal mechanisms and identify contextual factors associated with variation in outcomes.10–12 It is described as applicable and valuable at both the intervention development and implementation stages.9
Applied to co-creation, an evaluation of the process is crucial both at the development stage (ie, co-creating the intervention) and at the implementation stage (ie, implementing the co-created intervention). At both stages, a process evaluation can serve as a way to identify areas of improvement, ensure that the diverse perspectives and contributions of stakeholders are meaningfully integrated and that co-creators are experiencing a sense of joint ownership.13 It allows the co-creation efforts to evolve and become more effective in addressing public health issues by meeting the needs and wishes of the communities and individuals involved.13 Despite being crucial to ensure a meaningful practice and an evidence-based assessment of the co-creation process and developed solution/intervention, no process evaluation framework has yet been designed explicitly for the context of co-creation. As co-creation is an underused yet emerging approach in public health,1 3 14 we observe a lack of evaluation frameworks that account for essential aspects of the co-creation process15 and that align with the most recent literature on co-created public health interventions.16
For this reason, this review aims to explore how process evaluation is conceptualised, planned for and conducted in the context of co-creation, by providing an overview of process evaluation conceptualisations, the evaluation frameworks used and the components assessed at both the development and implementation stages. It represents the background and exploratory work on the ways in which process evaluation is conducted in co-creation projects, which will serve as the basis for recommendations to be published in our follow-up study.
Furthermore, several studies applying a co-creation approach have highlighted the importance of ensuring that stakeholders’ perceptions and experience of the process are captured and used to guide the intervention itself and/or adjustments and adaptations during the co-creation.17 18 This type of formative evaluation has been previously regarded as valuable in the context of co-creation and participatory research approaches.18–20 Hence, this review additionally aims to explore the extent to which included studies had planned for or conducted a formative evaluation, and, therefore, conducted, analysed or reported back evaluation results during the process to provide feedback to the co-creators and/or research team to adapt or improve the process.21
Finally, as engagement with the population of interest and stakeholders in co-creation processes is assumed to be happening throughout,2 in this study, we are interested in exploring the extent to which included studies planned for or conducted a participatory evaluation as part of the process evaluation. Participatory evaluation is described as a type of evaluation approach in which stakeholders are involved in the design of the evaluation, the data analysis or reporting.22
Overall, this study seeks to provide a review of studies planning and/or conducting a process evaluation of public health interventions adopting a co-creation approach, and aims to derive the process evaluation components assessed and the evaluation frameworks used, and to assess the extent to which studies conducted formative and/or participatory evaluation.
Methods
This research was conducted in two parts. First, we conducted a scoping review to identify frameworks and components used in the evaluation of a co-creation process and implementation of the related co-created interventions. Then, concept mapping23 was applied to identify a set of overarching dimensions to cluster the identified components.
Search strategy
This scoping review followed the PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews) guidelines.24
We searched the Health CASCADE and Scopus databases using the same search strategy: co-creat* OR co-creat* AND process AND evaluation. The Health CASCADE database is a recently published open-access database including peer-reviewed articles about co-creation across various fields.1 It was produced within the Health CASCADE project, a European-funded project aiming to develop the methodological foundation of evidence-based co-creation.25 The search on both databases was conducted with no time or language limitations. Following the database search, articles were exported into a CSV file to remove duplicates in Excel. The articles were then imported and screened in Rayyan.
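For readers who prefer a scripted workflow, the following is a minimal, illustrative sketch of how exported records could be deduplicated before screening. It is not the procedure used in this review (duplicates were removed in Excel); the file names and the 'title' column are assumptions about the CSV exports.

```python
import pandas as pd

# Records exported from Scopus and the Health CASCADE database.
# File names and the "title" column name are illustrative assumptions.
records = pd.concat(
    [pd.read_csv("scopus_export.csv"), pd.read_csv("health_cascade_export.csv")],
    ignore_index=True,
)

# Normalise titles so that differences in case or spacing do not hide duplicates.
records["title_norm"] = (
    records["title"].str.lower().str.replace(r"\s+", " ", regex=True).str.strip()
)

# Keep the first occurrence of each normalised title.
deduplicated = records.drop_duplicates(subset="title_norm").drop(columns="title_norm")

deduplicated.to_csv("records_for_screening.csv", index=False)
print(f"{len(records) - len(deduplicated)} duplicates removed; {len(deduplicated)} records retained")
```

The deduplicated file would then be imported into Rayyan for screening, as described above.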
Process of selection
All studies were double-screened by several reviewers at title and abstract level (GRL, JdB, KG, DMA, LM and SC) and at full-text level (GRL, JdB, KG, DMA, QA, TA, MV and MG-G), and irrelevant studies were removed against the agreed set of criteria. Differences of opinion regarding inclusion or exclusion were resolved by discussion and reaching consensus and, where this was not possible, by the involvement of a third reviewer (GRL, JdB, KG, DMA, QA, TA, MV and MG-G).
Eligibility criteria
In line with the recommendations of Levac et al, 26 the criteria for study inclusion were refined through iterative discussion among the research team. Articles were included if they complied with the definition of co-creation intended as ‘an evidence-based methodology for the development, implementation and evaluation of innovations through continuous, open collaboration, interactional knowledge production and shared decision-making among key stakeholders, directed at improving public health’.27
We included studies that explicitly mentioned planning or conducting a process evaluation of (a) the co-creation process at any of the intervention/project stages (eg, the engagement with relevant stakeholders in the needs analysis; intervention development) and/or (b) the implementation of the co-created interventions (eg, how the co-created intervention was carried out and received, and examining its fidelity, quality and acceptability). Included studies related to the public health field, defined as all organised measures (whether public or private) to prevent disease, promote health and prolong life among the population as a whole.28 All studies included had to be empirical studies, that is, gathering data based on experience, observations or experimentation.29
Full inclusion criteria for title and abstract screening and full text can be found in online supplemental file 1.
Data extraction
A template was developed in Excel to facilitate the extraction of information about included articles (see online supplemental file 2) and included data related to the definition of process evaluation, if applicable; frameworks used to guide the evaluation, if applicable; the process evaluation components assessed; and whether included articles conducted a formative evaluation21 or a participatory evaluation.22 All data were independently extracted by two reviewers (GRL, JdB, KG, DMA, QA, TA, MV and JRZR), and, in case of discrepancies, MG-G and GRL were involved, and consensus was reached for the final extraction.
Data analysis
To synthesise research findings related to the identified components, the extracted components were clustered by the first author (GRL) according to similarities. For instance, if we encountered components that were extracted and labelled as ‘facilitation’, we clustered these together with any related components that shared a similar thematic element, such as ‘facilitation of patients’ involvement’. In case of uncertainty, the last author (MG-G) was consulted.
In order to synthesise the identified components into a visually accessible format and to provide a structure to the results, we aimed to delineate a set of dimensions encompassing all individual components. To do so, all co-authors participated in three iteration rounds of consensus-making. First, to identify overall dimensions, co-authors were invited to independently group components and assign a name to each cluster via the online programme Trello.com. Each cluster would represent a dimension. Second, during an in-person meeting, using all dimensions that were drafted individually as a base, co-authors, as a group, sought consensus on a set of final dimensions.
Once dimensions were set, co-authors were asked individually to sort all components into the identified dimensions via the same online programme. We set a consensus threshold requiring that more than 50% of the co-authors agree on the placement of each component within a specific dimension. More than 50% agreement was obtained for all sorted components.
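As a minimal sketch of the agreement rule described above (the actual sorting was performed on Trello.com, not in code), the following assumes each co-author's sorting is recorded as a mapping from component to dimension and checks whether any dimension reaches the >50% threshold. The component and dimension names are examples drawn from the results.

```python
from collections import Counter

# Illustrative sortings: each co-author maps every component to a dimension.
sortings = [
    {"fidelity": "delivery", "satisfaction": "experience", "reach": "impact"},
    {"fidelity": "delivery", "satisfaction": "experience", "reach": "participation"},
    {"fidelity": "delivery", "satisfaction": "impact", "reach": "impact"},
]

def consensus(component: str, sortings: list[dict]) -> tuple[str, float] | None:
    """Return the winning dimension and its agreement share if more than 50%
    of co-authors placed the component there, otherwise None."""
    votes = Counter(s[component] for s in sortings if component in s)
    dimension, count = votes.most_common(1)[0]
    share = count / sum(votes.values())
    return (dimension, share) if share > 0.5 else None

for component in ["fidelity", "satisfaction", "reach"]:
    print(component, consensus(component, sortings))
# fidelity -> ('delivery', 1.0); satisfaction -> ('experience', ~0.67); reach -> ('impact', ~0.67)
```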
Results
By reviewing and analysing included studies, this review provides an overview of how process evaluation was conceptualised and conducted in co-creation projects. It achieves this by describing included studies, frameworks used and any adaptations made to those frameworks and reporting on the assessed process evaluation components.
Of the 1882 articles originally retrieved, 119 duplicates were removed and 1615 articles were excluded at title and abstract screening. 79 articles were excluded after the full-text screening, resulting in the inclusion of 54 studies. The PRISMA extension for scoping reviews has been used to present the screening process (figure 1).
Overview of included studies
Online supplemental file 3 shows the included studies and details about the authors, publishing year and the country in which the study was set, and specifies whether the study applied formative evaluation and/or participatory evaluation. The largest proportion of included studies was conducted in the USA (18%), followed by Canada (7%), the Netherlands (5%) and the UK (4%), with the rest spread across various countries, each representing 1%–3% of the total. Studies primarily focused on obesity prevention (7%) and mental health (4%). Other topics included nutrition, physical activity, workplace wellness and various public health issues such as HIV prevention, breast cancer, drug use, occupational health and more (1%–3% of the total).
All included studies were published between 2002 and 2022. There was an increase in publications between 2003 and 2017, with a peak of eight in 2016 and seven in 2017. Subsequently, from 2018 onward, there has been a continued growth in publications.
Formative and participatory evaluation
13 studies (24%) conducted formative evaluation during either the co-creation process or the intervention’s implementation and 37 (68%) conducted process evaluation after the intervention’s implementation (ie, summative evaluation). Two studies conducted a participatory evaluation (3%) while the remaining (97%) did not.
When it comes to formative evaluation, several authors identified potential implementation barriers and facilitators to support future adaptations or iterations of the intervention implementation.30–35 In some studies, the research team asked participants to reflect on perceptions related to the participants’ engagement36 or expectations,37 in order to adapt, if deemed necessary, subsequent intervention sessions, such as workshops and/or activities.36 38
Two studies conducted participatory evaluation in different forms. Gibbons et al 39 reported that, when presenting to the group, interested partners iteratively shared their thoughts, concerns and suggestions regarding the findings and their interpretation. More comprehensively, Harper et al 33 engaged with community representatives from the planning of the process evaluation, through the choice of evaluation methods and strategies, in accordance with both community sensitivity and scientific rigour, up to the interpretation of findings.
Process evaluation conceptualisation
24 studies (44%) did not explicitly define process evaluation. 30 studies (56%) included an explicit definition of process evaluation within the manuscript, whether it was by referencing an existing study or by providing a definition themselves. Definitions of process evaluation provided by the included studies are available in online supplemental file 4, including extracted quotes.
The manner in which process evaluation aims were described across studies provides insights into how the conceptualisation of process evaluation varied across studies. While the evaluation of intervention implementation is taken into account by the majority of the studies, several authors focused on other elements, including outcome-related evaluation,33 35 40 41 mechanisms driving impact,34 42–53 the contextual factors at play42 43 47 49 54–56 and the co-creation process.31 37–41 48 49 51 54 57–59
Several studies referred to process evaluation as the monitoring and reporting of intervention implementation and delivery. In these instances, process evaluation strives to paint ‘a clear, descriptive picture of the quality of the programme elements being put into place and what is taking place as the programme proceeds’.58 60 Parker et al,61 for instance, used process evaluation to gauge the extent, fidelity and quality of the intervention implementation. Similarly, Sormunen et al described process evaluation as ‘a process through which to report on structure and activities of the programme or intervention’.50
Four authors included an evaluation of outcomes as part of their process evaluation, defining this in several ways, including as a process in which you may analyse the ‘outcomes of the process used in the intervention’40 and ‘as a way to establish whether the partnership and project activities have been as intended and resulted in the expected outputs’.41 Magnusson et al described process evaluation as the procedure which ‘will monitor the processes in terms of reaching the intended outcome’35 while Harper et al described process evaluation’s goal as ‘to clarify anticipated outcome goals and criteria used in outcome evaluations that measure a programme’s relevance and accomplishments’.33
Authors in the included studies have also aimed to comprehend impact mechanisms as part of their process evaluation.34 42–53 Studies citing Steckler and Linnan62 described process evaluation as shedding light on the mechanisms explaining why some interventions produced the intended results and why others did not. Studies citing Moore et al 9 stressed the importance of examining the nature of what was implemented in practice and understanding the context around the intervention outcomes to inform future programmes. In this light, process evaluation is said to allow ‘to draw inferences about future applicability in the current setting and about generalisability and transferability to other settings’.63 Anselma et al,64 among others, stressed that process evaluations should help to gain a deeper understanding of the more and less effective elements of interventions, as well as facilitators and barriers to the intervention’s maintenance/sustainability.
The intention to capture mechanisms of impact ties in with the evaluation of contextual implementation barriers and facilitators. To try to capture mechanisms of impact, studies stressed the relevance of assessing contextual factors that may be influencing the co-creation process and intervention. Gathering insights about the intervention’s context, as part of the process evaluation, is seen as a way to ‘understand how and why the programmes work, and under what conditions’.46 Similarly, Palmer et al,65 citing Glasgow et al 66 described the process as the capturing of information about emerging barriers and facilitators to change implementation and to identify contextual (organisational and environmental) factors that affect the intervention.
Lastly, several studies refer to an evaluation of the co-creation process and related aspects when describing process evaluation.31 37–41 48 49 51 54 57–59 Fusari et al,43 for instance, highlighted the use of process evaluation as a way to learn about the engagement mechanisms of participants and stakeholders to unveil insights around impact mechanisms that may be necessary for scale-up. Tolma et al 58 included the intention, as part of their process evaluation, to evaluate stakeholders’ reactions, such as ‘the level of participation among intended recipients to the programme and reactions of the intended recipients to the programme’.58 Greer et al 40 and Anselma et al 64 included the assessment of enabled capacity building and empowerment as a result of the engagement and as part of their process evaluation.
Frameworks used
Eight studies (14%)42 43 47 49 54–56 cited evaluation or process evaluation frameworks developed by Moore et al,9 three studies (5%)48 51 58 cited Steckler and Linnan,62 three (5%)35 67 68 cited Saunders et al,8 three (5%)63 69 70 cited Nielsen and Randall,55 two studies (3%)40 71 cited Greer et al,40 two (3%)64 65 cited Glasgow et al,66 one study (1%)34 cited Damschroder et al,72 one study (1%)73 Nielsen and Abildgaard,74 one study (1%)59 Rowe and Frewer75 and one study (1%)76 Grant et al.77
Table 1 presents, in order of highest to lowest number of cited times, details of the frameworks that were used in the included studies to guide the process evaluation, including the modifications to the original framework.
Most studies adapted frameworks to include evaluation elements that refer to the co-creation process and the related experience of, and perceptions of, the intervention implementation and co-creation process.59 64 67 Additions to the MRC guidance9 included evaluation elements related to the participants’ experience of engaging in the co-creation process and/or intervention implementation. To the MRC guidance, Cedstrand et al 55 integrated Nielsen and Randall’s framework,69 while Fusari et al 43 included the use of the logic model.
To Nielsen and Randall’s framework,69 Yeary et al 51 added evaluation elements related to the acceptability of and satisfaction with the intervention components and the intervention reach (awareness of the intervention), while Tolma et al 58 further looked into barriers to intervention maintenance.
Dimensions
Figure 2 presents a visual representation of identified process evaluation components clustered in overarching dimensions.
Process evaluation dimensions
Each dimension and component may apply to both the co-creation process and the implementation of the intervention.
‘Delivery’ components measured the degree to which the co-creation process and/or intervention implementation was delivered as intended. This includes the reporting of the number of co-creation and/or intervention sessions (eg, workshops) delivered, the number of participants involved and so on, as well as reports on changes with respect to the original protocol. The dimension of ‘delivery’ encompasses the following process evaluation components: delivery, dose delivered, adherence, adaptation, dose received, exposure and fidelity.
‘Participation’ includes components assessing the extent to which individuals or groups have engaged with and participated in the co-creation process and/or implemented intervention. It includes components measuring the level of involvement and active engagement of the population of interest and/or end-users during the co-creation process and/or in the intervention, including the self-perceived degree of shared ownership and commitment. The latter may be observed and reported by facilitators and/or reported by participants. The dimension of ‘participation’ encompasses the following process evaluation components: participation, motivation, retention, facilitation, methods, partnership and recruitment.
‘Experience’ captures components measuring and assessing the subjective perception and evaluation of co-creation process and/or the implementation of the intervention by the individuals or groups who participated in it. It includes the assessment of (a) the experience related to the co-creation process and/or (b) the overall experience and involvement with the intervention implementation and actions. The dimension of ‘experience’ encompasses the following process evaluation components: acceptability, expectations, perceptions and satisfaction.
‘Context’ relates to components that are intended to examine the broader social, cultural, economic and political factors which create the system that can impact the success or failure of the intervention. The purpose of evaluating context might be to (a) understand the systemic factors which have influenced the public health issue at stake, (b) help ensure that the co-creation process and intervention are appropriately tailored to the specific context in which they are being implemented and (c) understand which environmental factors have had an impact on the co-creation process or intervention implementation. The dimension of ‘context’ encompasses the following process evaluation components: mapping, context, feasibility, readiness for change, support and resources.
‘Maintenance’ includes components that assessed the extent to which the intervention outcomes and/or relationships formed during the co-creation process and/or implementation of the intervention are being maintained. The dimension of ‘maintenance’ encompasses the following process evaluation components: maintenance, retention and future organisation.
‘Impact’ relates to components assessing the extent to which the co-creation process and/or implementation of the intervention has achieved one or more of its desired outcome(s) and its overall impact, including, for example, empowerment, self-reported or reported attitudes and/or changes towards the targeted health behaviour, self-perceived increase of well-being, awareness and satisfaction related to the participation in the process. The dimension of ‘impact’ encompasses the following process evaluation components: mechanisms of impact, impact, adoption, empowerment, capacity building, knowledge integration and evidence, communication, policy change and reach.
Process evaluation components
Among the most evaluated components are participation (26, 48%), context (22, 40%) and experience of co-creators (16, 29%), together with impact (16, 29%), satisfaction (14, 25%) and fidelity (13, 24%). Other components, in order of frequency of use, include the following: recruitment, reach, dose delivered, readiness for change, delivery, empowerment, motivation, dose received, support, capacity building, perceptions, maintenance, facilitation, communication, adherence, feasibility, exposure, adoption, adaptation, knowledge integration and evidence, resources, future organisation, policy-change, partnership, methods, expectations, acceptability and retention.
We describe below the most evaluated components (>23%), namely participation, context and experience of co-creators, impact, satisfaction and fidelity. A description of all components, as intended by the authors of the included studies, including the frequency of use, can be found in online supplemental file 5.
Participation
26 studies assessed participation as part of their process evaluation, including the extent to which individuals or groups who were the target of the intervention engaged with and participated in the co-creation process and/or implementation of the intervention. Studies assessed the nature and degree of participation,37 78–80 and more specifically, whether it was voluntary, that is, the extent to which there was a voluntary shift of responsibilities from providers to users,80 or equitable, ensuring all experiences were listened to, respected and represented at the table.30 45 71 81 Some assessed the extent to which there was continued or early engagement of communities throughout the process,45 59 78 82 including whether the objectives were set out and agreed by stakeholders at the start of the process,45 whether they had the chance and time to discuss and continuously revise the action plans30 73 or whether participants agreed they were targeting the most important problems in the intervention.73 83
Studies also specifically measured the participants’ involvement in decision-making,82 participants’ feelings regarding the transparency of the process,82 the occurrence of joint actions to meet community needs,60 the extent to which participants feel joint ownership63 or shared responsibility for the intervention.70 Studies also assessed the perspectives of participants on the process70 84 and, specifically, whether they have felt involved in the intervention,63 have established a trustful and open relationship with the working team45 85 and how they perceived the impact or accomplishment of the engagement process.39 Clark and Laing86 assessed the value of knowledge exchange while participating. den Broeder et al 87 looked at perceived factors facilitating or hindering the development of consensus and perceptions of the level of perceived consensus and actual consensus.
Other studies evaluated the benefits and barriers39 88 89 and implementation determinants related to the engagement process.79 Kelly and Van Vlaenderen78 focused on assessing the degree to which the communicative problematics of participation have been identified and dealt with in a project. Dennehy et al 90 used Lundy’s Model of Participation91 to operationalise participation, focusing on the evaluation of perceptions related to the creation of an inclusive and safe space for children, facilitation, and the extent to which their views are listened to and acted on.
Context
22 studies reported an assessment of context as part of their process evaluation, examining the broader social, cultural, economic and political factors impacting the success or failure of the intervention in a specific context.
Studies mostly evaluated the contextual factors that might impact or have impacted the intervention planning and implementation.42 51 67 68 73 A wide range of approaches to the definition of context were used. Reeve et al 49 assessed context as the larger social, political and economic environment that may influence the implementation of an intervention. Igel et al 47 included the evaluation of existing social, health and environmental issues while Schelvis et al 92 explored the organisational and the environmental characteristics that affect the intervention. Tolma et al 58 reviewed aspects related to the larger social, political and economic environment and Gensby et al 46 highlighted the importance of considering the political-administrative context in which rehabilitation programmes are practised. Robertson et al 56 focused on broader community and environmental factors, such as socioeconomic considerations and community participation.
Studies explored implementation barriers and enablers,31 45 58 93 94 some focusing specifically on existing organisational structures, professional values or sociopolitical context that enable successful implementation,95 96 environmental factors,30 resources available52 56 or events that occurred and influenced the content of the execution of the action plan.63 Beckerman-Hsu et al 76 also specifically looked at moderators and the extent to which their role impacts implementation.
Authors have also mapped the characteristics and distribution of a specific population or health issue in a particular geographical area. They identified, analysed and considered the systematic representation of relevant stakeholders,45 96 aimed to clarify context, processes and activities,96 to understand the community85 and to identify the contextual and procedural drivers of any desired change.57
Experience
16 studies evaluated the experience of participants and assessed the subjective perception of individuals or groups who participated in the co-creation process and/or intervention implementation. The majority of the studies48 54–59 assessed overall experience and involvement with the implemented intervention and actions while others31 55 60 evaluated how the participants specifically experienced the participatory process, or the coordination and collaboration in the process.59
Impact
16 studies assessed impact-related measures concerning the extent to which the intervention had achieved one or more of its desired outcome(s) and its overall impact. This included evaluating the impact of the intervention on the collaborative and equitable involvement of its members,97 patient health and well-being,98 employee engagement and participation in work,99 line manager attitudes and actions,92 and personal impact on advisory group members.90
Reeve et al 49 evaluated patients’ perceptions of the overall impact they perceived as a result of taking part in the intervention. Heggdal et al 98 specifically reported on whether the intervention had the intended effect on patient health and well-being and whether the intervention had prompted individuals to be more active or had led to changes in their health behaviours.83 84 92
Others have evaluated the institutional and organisational changes taking place among and beyond the group of participants57 92 99 and outcomes that were a result of the engagement process between several parties involved.61 79 100 Chrisman et al 60 assessed the concrete achievements of the intervention, such as the number of publications, programmes, evaluations and grants that have been produced.
Some studies focused on evaluating mechanisms of impact and examined how the intervention produced its intended outcomes. Some studies aimed to identify the specific causal mechanisms or pathways that linked the intervention to the observed changes in health-related behaviours, health outcomes or other targeted outcomes42 47 and one study specifically looked at factors and mechanisms which contributed to citizen participation and intersectoral collaboration.101
Satisfaction
14 studies assessed the level of satisfaction among the participants and/or end-users who received or participated in a co-creation process and/or public health intervention.
Satisfaction was evaluated in various aspects of the intervention, such as the overall intervention,50 63 67 84 97 102 design and implementation102 and more specifically, the partnership,97 the research process,97 products97 or team building process102 and dialogue103 and the progress of the co-creation group.84 Some studies assessed satisfaction with specific stages of the process, including satisfaction with the needs assessment phase and the developed action plan.63 Lelie et al 70 registered satisfaction with the appropriateness of tools and materials, intervention activities and intervention approach. Schelvis et al 92 aimed to capture satisfaction levels with the participatory process.
Fidelity
Fidelity was assessed in 13 studies and refers to the process of measuring and assessing the extent to which an intervention was delivered as intended, according to the original programme design or protocol. Studies evaluated fidelity by determining whether the intervention was implemented consistently and faithfully across different settings and by identifying any variations or adaptations that may have been made during implementation.32 42 46 51 55 58 61 63 67 68 92 104
Discussion
Broadening the scope of process evaluation for co-creation
The increased number of publications on process evaluations of co-creation projects included in the current review not only indicates a growing interest in the field but also a recognition of its potential benefits and relevance. However, the field of process evaluation in co-creation requires further research. As previous reviews have noted,105 106 it remains unclear why process evaluation frameworks are so rarely applied. The results from the current review align with those of two separate reviews on the use of process evaluation by Lazo-Porras et al 105 and Liu et al 106 in chronic and neglected tropical diseases in low-income and middle-income countries and in primary care interventions addressing chronic disease. Both studies indicate a low percentage of included studies that reference existing frameworks in process evaluation (12% and 31%, respectively). Among the recommendations for the use of process evaluation in the study by Lazo-Porras et al 105 was to standardise reporting to ensure consistency and comparability among studies.
Echoing the above-mentioned results and recommendation, the results of this review highlight the importance of addressing the need for a standardised process evaluation specifically designed for co-creation. Such evaluation should capture essential co-creation elements as part of the co-creation process as well as part of the implementation of the co-created solution. An evaluation of the co-creation process would need the inclusion of specific elements, such as an assessment of the active collaboration with the stakeholders, the experience, facilitation and levels of participation. The process evaluation carried out in the included study by Dyer et al 59 illustrates this by focusing in-depth on an evaluation of the engagement and participation of co-creators in the co-creation and implementation process. The authors include valuable evaluation elements which relate to the following aspects: (a) the early engagement of communities in the process; (b) identification, analysis and systematic representation of relevant stakeholders; (c) clear objectives set out and agreed by stakeholders at the start of the process; (d) continued engagement of communities throughout the process; (e) relevant methods chosen and tailored to the context; (f) participants and level of engagement; (g) highly skilled facilitation of the process; (h) integration of local and scientific knowledge; (i) open and meaningful face-to-face information exchange and interaction; (j) transparency, trust and fairness; (k) equality among stakeholders and (l) competent management throughout the process.
Most importantly, this review has surfaced a growing trend of bringing the co-creation process into the conceptualisation of process evaluation.31 37–41 48 49 51 54 57–59 Studies have done this by incorporating co-creation elements in existing process evaluation frameworks,59 64 67 including an assessment of experience34 48 49 54–56 70 107 108 and components related to participation.25 30 37 39 45 60 63 70 71 78 80 81 85 87 Placing value on the co-creation process and its evaluation might entail having to consider the co-creation process an intervention in itself, with its own impacts and process evaluation. An evaluation of the co-creation process might be crucial, as it is strictly linked to the implementation of the co-created solution. Equally valuing the process of co-creation and intervention implementation may enable us to grasp a more complete picture and to explore the relation between the process which co-created the solution (eg, intervention) and the implementation of the solution/intervention itself.
Participatory evaluation and formative evaluation
Despite participatory evaluation being considered a potentially recurring approach to process evaluation, very few studies adopted it (3%). We speculate that this could be attributed to potential challenges associated with its implementation, including the additional time it may require from participants and the possibility that it may not be perceived as highly significant by the studies that have included it. More guidance might be needed on how to conduct participatory evaluation in a way that is relevant to the stakeholders and adherent to co-creation principles. One first step might be, as done in the included study by Anselma et al,53 to share the effect and process evaluation plan and ask the population of interest, in this case children, to reflect on the proposed measures and to suggest potential additional evaluation outcomes or methods.
13 studies (24%) have been found to adopt a formative evaluation approach. Formative evaluation has been thought useful for the identification and resolution of potential issues that could hinder the intervention’s implementation and/or related solution development109 and as an opportunity to explore whether the intervention is addressing a significant need, using ongoing input for short-term adjustments and to detect and adjust, if needed, to unanticipated events and local adaptations.109
Formative evaluation, especially in the context of co-creation, has been considered valuable for capturing feedback from the population of interest and stakeholders regarding the co-creation process and its implementation, and for tailoring implementation strategies.19 20 It may be particularly significant as a way to gauge stakeholders’ active participation, ensure their perspectives are comprehensively captured and integrated into the intervention to ensure a successful intervention,18–20 and allow for fine-tuning of the intervention implementation so that it aligns closely with stakeholders’ insights, feedback and concerns.35
Formative evaluation may be considered a characteristic inherent to co-creation, as the process is considered highly iterative.13 This iterative nature built into co-creation might represent a challenge when it comes to the evaluation of fidelity. A challenge might be faced if formative evaluation is either not reported, as this usually happens more informally, or avoided altogether, particularly in the case of well-controlled randomised trials, which may typically refrain from postapproval alterations.109 As co-creation adopts an approach which is receptive to stakeholders’ context and feedback, the evaluation should not solely report adherence to predetermined steps but should also value, and adapt to when possible, the lived experience, knowledge and values of the co-creators.
To be able to measure the extent to which formative evaluation activities exert influence on the implementation, thoroughly reporting modifications becomes essential. It is therefore important, in this respect, that the intention of formative evaluation is explicated and that the reasons for, and the applied, modifications are reported, including why and how formative feedback was collected and used, by whom and to what extent it was integrated in the modifications.109
Recommendations for future research
Based on a search of the published literature, this is the first scoping review of process evaluations planned or conducted in the context of co-creation for public health. Findings from this study lead to several implications for the field of process evaluation for co-creation.
First, the incorporation of extra elements into existing process evaluation frameworks and the focus on process evaluation components related to the co-creation process, such as experience, participation and satisfaction, suggest that existing process evaluation frameworks may fall short in comprehensively evaluating the co-creation process. It is also important to recognise, as expressed throughout the manuscript, the value authors have placed on components related to context and mechanisms of impact.
Second, placing a focus on the co-creation process may necessitate valuing the co-creation process as an intervention in itself. Equally, valuing both the co-creation process and the intervention implementation as distinct interventions and conducting process evaluations for both may help to provide a more comprehensive picture of co-creation.
Third, the high percentage of use of formative evaluation throughout included studies may suggest that this is key to the context of co-creation processes and may help account for the iterative nature of the approach and for adapting the co-creation process and intervention to the co-creators’ lived experience, knowledge and values. Conversely, the limited use of participatory evaluation by included studies may suggest either a lack of relevance or constraints in its practical implementation. This scoping review is conducted as part of the Health CASCADE project and its findings will be used to inform the development of further guidance on planning and evaluating co-creation for public health. The authors involved in the guidance development will expand on the components identified, recommend methods for evaluation and include practical examples to support researchers and practitioners.
How to use this review?
We see this review as serving three distinctive objectives. First, to provide an overview of existing conceptualisations related to process evaluation and frameworks used to guide the planning of process evaluation for co-creation. Second, to identify process evaluation components that previous studies took into account, to get a sense of what was valued as part of their planned or conducted process evaluation of co-creation. Lastly, the review seeks to facilitate reflection on process evaluation components that researchers and practitioners could consider when planning for the process evaluation of co-creation in the field of public health.
Study limitations
First, the framework modifications detailed in table 1 stem from our subjective understanding of the components and may not have been explicitly reported as modifications in the included studies. Second, each identified process evaluation component described in online supplemental file 5 is presented as described by the authors of the included studies. No modifications have been made to the clustering and description of identified process evaluation components, in order to portray accurately what had been done and how components were intended by the included authors. Finally, even though a >50% agreement sorting rule was set, some co-authors expressed difficulty in placing individual components into one dimension as they felt some could have related to several dimensions.
For the reasons expressed above, it should be noted that review findings should not be seen as a source of expert advice on process evaluation, but rather considered as a synthesis of current practice which can help reflect on the planning for process evaluation in the context of co-creation. Furthermore, while almost all the co-authors found most process evaluation components to be applicable and relevant to both stages, some shared the challenge of thinking of the components without categorising them into the (a) co-creation process and (b) implementation of co-created solution/intervention. For the development of the process evaluation framework for co-creation planned as a follow-up study, although we anticipate some overlaps, we will explicitly refer to these two stages distinctively.
Lastly, it should be noted that authors used their discretion to determine inclusion or exclusion, based on their own judgement and consensus between reviewers. Hence, the decision on whether studies complied with the set definition of co-creation rested on the reviewers’ own perceptions. Any inconsistencies were discussed with the involvement of a third reviewer and, if needed, discussed with a broader group of reviewers for alignment.
Conclusion
This study offers an overview of process evaluation frameworks and components reported in studies conducting process evaluation of co-creation in public health. Results show a pluralistic understanding of process evaluation, which varies according to authors and refers to process evaluation concepts related to intervention implementation, outcome evaluation, mechanisms of impact, context and the co-creation process.
Alongside standard process evaluation components that relate to the intervention’s implementation, attention has been placed, by authors of included studies, on process evaluation components related to participation, context, experience of co-creators, together with impact, satisfaction and fidelity. The study, overall, encourages the adoption of a holistic perspective to process evaluation, encompassing elements that allow for an enriched understanding of the process and for a comprehensive evaluation and replication of effective and meaningful interventions. By highlighting important gaps in the field, the findings also serve to inform future methodological work and guidance development on process evaluation and can be used as guidance when planning for process evaluation.
Ethics statements
Patient consent for publication
Acknowledgments
We thank Benedicte Deforche and Lea Delfmann for contributing to the concept mapping exercise.
References
Footnotes
Handling editor Valery Ridde
X @giulianalongi
Contributors GRL and MG-G developed the study concept, to which all authors provided critical feedback. Title and abstract screening were performed by GRL, SC, JdB, DMA, LM and KG and conflicts were resolved by GRL, TA, QA and MV. Full-text screening was performed by GRL, DMA, JdB, MV, TA, QA, KG and MG-G and conflicts resolved by GRL, KG, MV, DMA, JdB, MG-G, QA and TA. Data extraction was conducted by GRL, TA, MV, KG, JdB, QA, DMA and JRZR. The first draft of the manuscript was prepared by GRL. The first round of the concept-mapping exercise was performed by GRL, AD, MG-G, TA and MV while the second was performed, in person, by MG-G, DMA, MV, LM, JdB, KG, BD and LD. The latter two, not listed as co-authors, have been thanked for participating in the acknowledgements. All authors contributed to the article and approved the submitted version. GRL will be acting as the guarantor for this study.
Funding This study has been funded by the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement no 956501.
Disclaimer The views expressed in this paper are the authors’ views and do not necessarily reflect those of the funders.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.