Peggy Mcnamara, Provider-specific report cards: a tool for health sector accountability in developing countries, Health Policy and Planning, Volume 21, Issue 2, March 2006, Pages 101–109, https://doi.org/10.1093/heapol/czj009
Abstract
In most health care systems in most countries, providers are not adequately held accountable – by governments, purchasers, provider professional associations or civil society – for the quality of care. One approach to improve provider accountability that is being debated and implemented in a subset of developed countries and a smaller group of developing countries is provider-specific comparative performance reporting. This review discusses universal design options for report cards, summarizes the evidence base, presents developing country examples, reviews challenges and outlines implementation steps. The ultimate aim is to provoke thoughtful debate about whether and how comparative performance reporting fits within a developing country's broader framework of strategies to promote quality of care.
Introduction
In most health care systems in most countries, providers are not adequately held accountable – by provider professional associations, consumers, governments or purchasers – for their performance. Particularly in under-funded systems, interest in assessing and improving quality is often crowded out at the national level by the seemingly more immediate or overarching need to assure coverage and access. Similarly perhaps, interest in assessing and improving quality is dwarfed at the provider level by the pressing need to assure financial viability. In contexts such as these, the discussion of monitoring and improving quality of care, unfortunately, is viewed too often as a luxury (Heiby 2002).
One approach being debated and implemented to improve provider accountability for quality, particularly in the US, is the provider report card. The concept of provider-specific reports, however, is far from new.
I am fain to sum up with an urgent appeal for adopting this or some uniform system of publishing the statistical records of hospitals. If they could be obtained … they would show subscribers how their money was being spent, what amount of good was really being done with it, or whether the money was doing mischief rather than good.
Report cards – also called consumer reports, performance reports, provider profiles, quality assessment reports, score cards, citizen report cards and league tables – are not a panacea but instead represent one of a number of approaches that might have a place in a country or community's overarching quality strategy.
For the purpose of this review, provider report cards refer to any effort to compare providers within a specified geographic region (for which all providers are eligible to be included) on a routine basis, according to certain standards of quality performance. A principal aim of this review is to question a perception that report cards are an approach exclusive to developed countries. A second aim is to help frame relevant policy decisions and design options. The ultimate intent is to provoke thoughtful debate about whether and how report cards fit within a developing country's set of strategies to promote quality of care alongside regulatory, payment and training initiatives.
In the first section, several report card classification schemes are presented. Next, the evidence base for report cards is summarized, and report card experiences of several developing countries are highlighted. The paper concludes with a discussion of comparative performance reporting challenges within a developing country context and an outline of decision points in designing and implementing a report card.
Methodology
A convenience sample of report cards was reviewed to develop a menu of fundamental design features. Developed country examples were identified from published and gray literature. Because the term ‘report card’ is largely absent from the lexicon in developing countries’ health sector literature, developing country examples, far fewer in number, were found through key informants having expertise in international quality improvement or citizenry's use of voice. Neither developed nor developing country examples are intended as all-inclusive compendia. Contextual details of report card examples are limited to information contained in the referenced literature.
Report card classifications
Report cards can be stratified in a number of ways – by whether they are intended for the public domain or not, by their sponsorship, and by whether provider inclusion is considered voluntary or mandatory. These alternative classification schemes are discussed below. While the discussion is illustrated with examples primarily from the US, because of its considerable and varied report card activity, the classifications have universal application and are used in a subsequent section to frame discussion of basic design options.
Public vs. private report cards
In public reports, providers are identified by name and the performance data are intended for the public domain. Public reporting is typically grounded in one or more of several beliefs. The first is that public reporting better enables competition based on quality. If a provider faces direct competition, then it has a strong incentive to perform because consumers can choose to go elsewhere (Slack and Savedoff 2001). The second is that public reporting facilitates citizenry use of ‘voice’ – public dialogue with and challenge to leadership (Paul 1992). Putting evidence in the public domain has the potential to change the nature of dialogue, making it difficult for legitimate actors to ignore problems (Murray 2003). If successful, public reporting empowers a broad section of civil society to ask why one provider has achieved considerably better performance than another, and why some providers choose to ignore sector standards (Kingdom and Jagannathan 2001). In this scenario, public reports serve as a proxy for the pressure of competition in a context where consumers realistically have few choices (Paul, undated). The third belief is that publication of performance ratings, even in the absence of an actual competitive effect or consumer engagement, triggers quality-improvement activity in part, perhaps, because providers want to be viewed favourably by their peers.
The State of New York sponsors one of the oldest public report card efforts in the US. The cardiac surgery reporting programme has, since 1989, annually published hospital-specific and surgeon-specific data on the technical outcome measure of risk-adjusted mortality following coronary artery bypass graft surgery (Chassin 2002).
Private reports, in contrast, share performance data only with providers. The aim is to present each individual provider with a confidential report comparing its performance on a range of quality indicators with, for example, a community average, a peer-group average or a normative benchmark. The identity of other providers – typically but not always – is blinded. Private reporting is done out of recognition that, in order to consider practice changes, providers first need information indicating that they are performing below a community average or accepted standard. Without such comparative information, providers tend to view their performance as average or above average. Private reports are intended to support internal quality improvement efforts.
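The mechanics of such a blinded comparison are straightforward to sketch. The Python fragment below (an illustrative sketch only; the single-indicator structure and score values are assumptions, not drawn from any actual reporting programme) builds a confidential report for one provider: its own score, the community average, and the distribution of peer scores with identities removed.

```python
def private_report(provider_id, scores):
    """Build a blinded private report for one provider.

    scores: {provider_id: indicator_score} for all providers in the
    community. The named provider sees its own score, the community
    average, and the sorted peer scores with identities stripped.
    Illustrative sketch; real reports cover many indicators.
    """
    own = scores[provider_id]
    peers = [v for k, v in scores.items() if k != provider_id]
    return {
        "own_score": own,
        "community_average": sum(scores.values()) / len(scores),
        "blinded_peer_scores": sorted(peers),  # identities removed
    }
```

A provider scoring 70 against peers at 80 and 90, for example, would learn that it sits below a community average of 80 without learning which peer achieved which score.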
The US Society of Thoracic Surgeons (STS) sponsors a nationwide private reporting effort. More than 500 hospitals and surgical group practices voluntarily contribute cardiac surgery data (e.g. medical record data on coronary bypass procedures) to STS's National Adult Cardiac Surgery Database and reporting system. Every 6 months participating providers receive a private report from STS that compares their process measures and risk-adjusted outcomes with regional and national averages (Society of Thoracic Surgeons 2003).
Sponsorship
National and regional governments, quasi-governmental organizations, health plans and other private-sector purchasers, provider professional associations, media and civil society can and do sponsor provider-specific reporting efforts. Interestingly but not surprisingly, reporting efforts sponsored by professional societies tend to be private. Efforts sponsored by governments, civil society and media, on the other hand, tend to be for the public domain. Health plan and other purchaser-sponsored report cards include both private and public efforts.
Table 1 lists a US or UK report card example for each sponsorship category, and indicates whether the particular effort is public or private.
| Type of report sponsor | Report card example | Private vs. public report card |
|---|---|---|
| Government as public guardian | US Rhode Island State Department of Health | Public |
| Quasi-regulatory organization | US JCAHO | Public |
| Employer (private purchaser) | US Leapfrog | Public |
| Government as purchaser | US Centers for Medicare and Medicaid Services | Public |
| Health plan | US Blue Cross Blue Shield of Illinois | Private |
| Provider professional association | US Society of Thoracic Surgeons | Private |
| Media | UK Dr Foster | Public |
| Civil society | US Consumer Checkbook | Public |
Voluntary or mandatory
Reporting can be voluntary or mandatory. Advocates of public reporting tend to support mandatory reporting, recognizing that under a voluntary public reporting system, poor performers will likely opt out. Evidence gives credence to this concern (McCormick et al. 2002). In contrast, private reporting – sponsored, for example, by provider professional associations and motivated by interest in supporting internal quality improvement – is workable within a voluntary context.
The decision about voluntary vs. mandatory participation is interrelated with the type of data being used, and whether those data can be obtained without the permission of the provider. To the extent that providers must consent to the collection of data used in a report card, i.e. in the absence of a regulatory requirement or purchaser mandate, report cards are likely to be voluntary. Some voluntary report sponsors, such as the US Centers for Medicare and Medicaid Services with its voluntary hospital quality initiative, pay providers a reporting bonus to increase provider participation.
Evidence
The appeal of report cards rests in their potential to promote accountability for quality. The theory is that report cards might impact quality by influencing provider behaviour, consumer behaviour or both.
There is some evidence that reporting encourages providers to improve their quality of care. An analysis of a national private reporting effort to track and improve hospital quality, sponsored by the US Centers for Medicare and Medicaid Services through its contracts with 53 Quality Improvement Organizations, suggests that performance along most of the measured indicators improved from the period 1998–99 to 2000–01. The proportion of Medicare patients receiving appropriate care, as measured according to 22 standards, improved from 70% to 73%, on average (Jencks et al. 2003).
The initial (1989) New York State cardiac surgery public report revealed wide variation in mortality rates among providers.1 After its publication, lower-rated hospitals responded by improving their cardiac surgery programmes, after which state-wide mortality fell substantially. Overall, risk-adjusted coronary artery bypass graft (CABG) mortality fell 41% state-wide in the first 3 years of the reporting system. One of the poorest performing hospitals implemented a number of organizational changes, for example installing a dedicated cardiac anaesthesia service. This hospital, in 2002, achieved the distinction of having the lowest risk-adjusted mortality of any hospital in the State (Chassin 2002).
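Risk-adjusted mortality of the kind reported in New York is commonly summarized as an observed-to-expected (O/E) ratio scaled by the overall rate. The sketch below illustrates that general idea only; the New York programme fits a statistical risk model to estimate expected deaths, so the figures and the simple scaling here are illustrative assumptions, not the programme's actual method.

```python
def risk_adjusted_mortality(observed_deaths, expected_deaths, statewide_rate):
    """O/E method: a hospital whose patients were sicker than average
    (expected_deaths high) is credited accordingly; a ratio above 1
    scales its adjusted rate above the statewide rate."""
    return (observed_deaths / expected_deaths) * statewide_rate

# Hypothetical hospital: 10 observed deaths vs. 8 expected under the
# risk model, against a statewide CABG mortality rate of 3%.
rate = risk_adjusted_mortality(10, 8, 0.03)  # higher than 3%: worse than expected
```

The O/E ratio is what makes fair comparison possible: without it, hospitals treating the sickest patients would appear to be the worst performers.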
1Some of the text in the following three paragraphs is drawn from McNamara (2005b).
A recent seminal study tracked the number of hospital quality improvement activities across three scenarios: hospitals that are not part of any reporting effort; hospitals whose performance is privately reported; and hospitals whose performance is publicly reported. Hospital quality improvement activities were least frequent among those not participating in any reporting effort, and most frequent among hospitals whose performance is publicly reported (Hibbard et al. 2003). The second phase of this study found that quality improvement, as measured by obstetrics performance, was most dramatic among hospitals whose performance was publicly reported and least frequent among hospitals with no report card activity (Hibbard 2005).
Interestingly, there is little US-generated evidence that stakeholders other than providers are influenced by performance reports. In the case of the New York State cardiac surgery report, only the cardiac care providers themselves were found to use the data. Patients did not change their care-seeking patterns to avoid high-mortality hospitals (Chassin 2002). This is consistent with several syntheses – most US-generated – that conclude that publicly disclosed comparative data has had little to no impact on consumers’ selection of providers (Marshall et al. 2000; Schauffler and Mordavsky 2001; Schneider and Lieberman 2001; Mehrotra et al. 2003). Any positive benefits of report cards in the US, it seems, are the result of providers’ responses independent of any consumer action.
But not every report card results in quality improvement. The potential for report cards to improve quality depends on a number of contextual factors that affect the design, implementation and use of report cards. Key contextual factors include cultural characteristics (e.g. literacy rates, corruption indices, consumerism), health care market attributes (e.g. purchaser mix, provider supply) and information system capacity, to name a few. Some of these factors and others are discussed later in the review.
Report card experience in developing countries
While most report card examples are from developed countries and in particular the US, a few developing countries have experience with comparative performance reporting. Most of it, however, is not done on a routine basis.
A notable and pioneering public report card example is that of Uganda. The Yellow Star Program, sponsored by the Ministry of Health in collaboration with donor organizations, evaluates health care facilities on a quarterly basis using 35 indicators. While not referred to as a report card, the Program meets the definition of report card used in this review; it compares providers within a specified region (in which all providers are eligible to be included) on a routine basis according to certain standards of quality performance. Indicators span technical and interpersonal domains, and include standards for infrastructure, management systems, infection prevention, health education and interpersonal communication, clinical skills and client services. Ratings are made available in a general way to the community; facilities receiving a 100% score for two consecutive quarters are awarded a yellow star, which is then posted prominently on the outside of each recognized facility for the community to see. A yellow star can be removed subsequently if performance falters.
Echoing US-based research findings, a preliminary evaluation of the scores from the early implementation sites in Uganda indicates that the average score climbed from 47% for the first quarter to 65% for the second. Initially implemented in 12 of the country's 56 districts, plans to take the Program nationwide are underway (Uganda DISH 2004).
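The Yellow Star award rule described above – a star after two consecutive 100% quarters, withdrawn when performance falters – can be sketched as a simple state machine. This is a reading of the rule as summarized in this review; the programme's exact withdrawal criteria are an assumption here.

```python
def yellow_star_status(quarterly_scores):
    """Track whether a facility holds a yellow star after each quarter.

    A star is awarded after two consecutive 100% quarters. Assumed
    (not confirmed by the source): any sub-100% quarter both resets
    the streak and withdraws an existing star.
    """
    has_star = False
    streak = 0          # consecutive quarters at 100%
    history = []
    for score in quarterly_scores:
        if score == 100:
            streak += 1
            if streak >= 2:
                has_star = True
        else:
            streak = 0
            has_star = False  # star removed when performance falters
        history.append(has_star)
    return history
```

A facility scoring [100, 100, 80, 100, 100] would thus gain the star in quarter two, lose it in quarter three, and regain it only after two further perfect quarters.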
Most developing country experience with comparative performance ratings is represented by citizen report cards or government score cards, which have been developed and supported by civil society over the last decade. Initiated in Bangalore, India in 1993, government score cards survey the citizenry to quantify perceptions of the quality and effectiveness of various local government services, including health care services, and create public domain reports (World Bank 2003). The aim is to improve agency accountability by increasing public awareness and generating community pressure for service delivery improvements. Satisfaction on specific dimensions, such as behaviour of staff, problem resolution rates, quality of service and adequacy of information provided, is queried and tabulated. Individual agencies are ranked by average satisfaction scores, percentage of users that are satisfied, and percentage of users that are dissatisfied. The rankings are then disseminated to the media and to community groups (Paul, undated).
An evaluation of the impact of the Bangalore score card found that it succeeded in creating increased public awareness and generated a new confidence among citizens that collective action was feasible. As a result, some agencies developed initiatives to respond to unsatisfactory score-card ratings, such as creating their own customer satisfaction monitoring systems, convening public meetings to solicit additional consumer input and sponsoring employee training workshops. An evaluation survey of 100 citizens found that 93% agreed that awareness of public service problems increased and 69% observed service improvements (Paul, undated). A second report card, done in 1999, found that public services to some extent had improved in the intervening 5 year period, although for most government services, less than 50% of the respondents were satisfied (World Bank 2003).
The Bangalore score card has been replicated in other cities in India and other countries, including Ghana, the Philippines, Uzbekistan and Ukraine (World Bank 2001, 2003). Score cards exclusively targeting health sector performance have been developed, for example, in Bangalore and in the Philippines. They rate the frequency of visits for which doctors were present, length of wait before seeing the doctor, patients’ levels of satisfaction with respect to doctor behaviour, and patients’ levels of satisfaction with respect to quality of care; and probe reasons for any dissatisfaction (Balakrishnan 2003).
Public report cards also have been used in the environmental sector in a number of developing countries. This experience represents another important body of evidence of potential relevance to health sector report cards.2
2Public reporting of environmental sector performance, being pursued by a number of developing countries and communities, represents an important body of experience relevant to the discussion of health sector report cards. Faced with widespread violation of pollution prohibitions, for example, São Paulo in 1991 began public reporting of the violators. As a result of the public reporting and imposition of fines, 95% of the violators installed waste treatment units. Similarly, Indonesia's environmental protection agency began publicly rating industry compliance with environmental standards, which brought about spectacular improvements in pollution abatement. According to advocates of environmental performance reports, exposing the worst performers has proven to be a powerful way of pressuring companies to provide better services. By focusing political attention on service quality, benchmarking can also help to shield regulators from political interference (Kingdom and Jagannathan 2001).
Design features
Compared with the US, health sector report cards in developing countries are relatively homogeneous. The typical model is a mandatory, public domain report card sponsored by government or civil society. None of those identified, for example, are sponsored by private purchasers or professional societies. Perhaps this is due to the relatively smaller role of private purchasers in developing countries, and the lack of well-resourced professional societies. Nor were any private or voluntary models found, perhaps because they tend to be associated with professional societies.
Interestingly, while developed country examples primarily focus on care provided in hospitals, developing country examples, such as those in Uganda and Bangalore, extend their reach to include primary care providers. Developed countries’ relative focus on hospitals, at least in the case of the US, is not for lack of interest in primary care quality. Instead it may be due, in part, to the fact that few purchasers, who are key proponents of report cards in the US, represent more than a fraction of any single primary care physician's case load. Another possible explanation is that hospital claims data, which are the data source for certain technical quality measures that can be used in report cards, are publicly accessible in some states. In contrast, physician claims data are not as readily available.
Developing country report cards do vary in terms of the number and type of quality indicators that are included. The most comprehensive reporting effort found was that of Uganda, with its 35 performance standards.
Standards being used to rate provider performance span technical and interpersonal dimensions of quality, and include structure, process and outcome attributes.3 Table 2 presents a sample of quality indicators found in the report cards of Uganda and Bangalore.
3Donabedian identifies two basic elements of quality performance: technical and interpersonal performance. Technical performance depends on clinical knowledge and judgement used in arriving at an appropriate strategy of care and on the skill in implementing that strategy. Interpersonal performance, the vehicle by which technical care is implemented, depends on the management of processes that relate to privacy, confidentiality, informed choice, concern, empathy, honesty, tact and sensitivity. Information from which inferences can be drawn about quality of care – be it technical quality or interpersonal quality – can be grouped into three categories: structure, process and outcomes. Structure refers to attributes of the setting in which care occurs. For technical quality, this includes attributes of material resources (e.g. facilities, equipment, supplies), of human resources (e.g. number and qualifications of personnel), and of organizational structure (e.g. infection control system, staff payment system). For interpersonal quality, structure indicators include, for example, complaint registries, satisfaction surveys, ombudsman programmes. Process refers to what is being done in giving care. For technical quality, process includes provider activities in making a diagnosis and implementing treatment. For interpersonal quality, process includes, for example, provider practices to involve patients in decision-making about their care. Outcome, for technical quality, refers to the effect of care on the health status of the patient. For interpersonal quality, outcome refers, for example, to patient satisfaction level, bypass patterns, waiting times (Donabedian 1988.).
| | Examples of quality standards used in Uganda and Bangalore report cards |
|---|---|
| Technical quality – structural standard | Availability of water, Uganda (Uganda DISH 2004) |
| | Waste disposal mechanisms, Uganda (Uganda DISH 2004) |
| | Drug stock management procedures, Uganda (Uganda DISH 2004) |
| | Client registries, Uganda (Uganda DISH 2004) |
| | Containers for needle disposal, Uganda (Uganda DISH 2004) |
| | Frequency of visits for which doctor was present, Bangalore (Balakrishnan 2003) |
| Technical quality – process standard | Adherence to guidelines for monitoring growth of children, Uganda (Uganda DISH 2004) |
| | Management of malaria cases, Uganda (Uganda DISH 2004) |
| Interpersonal quality – outcome standard | Waiting time, Uganda (Uganda DISH 2004) and Bangalore (Balakrishnan 2003) |
| | Patient privacy, Uganda (Uganda DISH 2004) |
| | Patient's level of satisfaction with doctor, Bangalore (Balakrishnan 2003) |
A broader look at developing country initiatives to assess quality of care at the provider level yields a richer menu of measures that could be considered in designing a report card. As indicated in Table 3, Cambodia, Haiti, Costa Rica and Nicaragua measure and monitor quality performance at the provider level, although these initiatives currently do not include a comparative reporting component.
| | Examples of primary or preventive care quality indicators | Examples of hospital quality indicators |
|---|---|---|
| Technical quality – structural standard | Adequacy of equipment, records and supplies related to quality immunization, Cambodia (Fronczak et al. 2000) | Establishment of an internal hospital quality committee, Costa Rica (Cercone et al. 2000) |
| | Existence of a commission to analyze maternal and infant deaths and to establish intervention plan, Costa Rica (Abramson 2001) | Unit-dose distribution system, Costa Rica (Cercone et al. 2000) |
| | Availability of family planning supplies, Haiti (Eichler et al. 2001) | Protocols for the prevention of nosocomial infections, Costa Rica (Cercone et al. 2000) |
| Technical quality – process standard | Application of care protocols, Costa Rica (Abramson 2001) | Delivery complication rate, Costa Rica (Cercone et al. 2000) |
| Interpersonal quality – structural standard | Existence of a consumer suggestion and resolution system, Costa Rica (Abramson 2001) | Application and reporting of a consumer satisfaction survey, Costa Rica (Cercone et al. 2000) |
| | Application of a user satisfaction instrument, Costa Rica (Abramson 2001) | Linkage with a Consultative Council, made up of local civil representatives, to facilitate provider to patient communication, Nicaragua (Jack 2003) |
| Interpersonal quality – outcome standard | Average waiting time for attention to children, Haiti (Eichler et al. 2001) | Rates of complaints, Nicaragua (Jack 2003) |
| | | Average waiting time for surgery, Costa Rica (Cercone et al. 2000) |
Cautions
Provider-specific reports are not without challenges.4 Low operational autonomy of providers, as restricted by government rules, can limit providers' ability to respond to shortcomings such as those revealed by report cards (Langenbrunner and Liu 2005). Similarly, civil service or union rules can restrict a manager's ability to institute changes to correct deficiencies.
4Some of the text in the following three paragraphs is drawn from McNamara (2005a).
A lack of timely and routine information systems, and limits on a sponsor's capacity to monitor quality indicators, also pose challenges for report cards (Langenbrunner and Liu 2005). Efforts to develop and maintain routine data collection and information systems can be prohibitively costly, particularly in under-resourced systems.
There is also a concern that providers will ‘perform to the measures’, i.e. focus exclusively on the subset of care that is being measured and neglect the quality of the larger portion of care that is not. In the case of public report cards, some fear that poor performers may use a low rating as an opportunity to request additional resources (Kingdom and Jagannathan 2001). Lastly, the political backing needed, at least for government-sponsored report cards, may be difficult to secure in the absence of a countervailing swell of support from consumer activist groups, given that politicians are often loath to endorse initiatives with explicit winners and losers.
Steps in designing a report card
Below are eight decision points, not necessarily in chronological order, associated with implementing a report card.
Step 1: Determine if the report card will be private or public
Public reporting is appropriate if the primary aim is to better enable competition based on quality or to facilitate public dialogue on quality. Private reporting is appropriate if the aim is to help providers understand their own performance vis-à-vis the community average or an accepted normative standard. Private reporting has the advantage of being more appealing to the provider community, but US research (Hibbard et al. 2003; Hibbard 2005) suggests it has a smaller effect on quality improvement. A report sponsor may opt to begin with a private report and, over time, transition in whole or in part to a public report.
Step 2: Determine if provider participation will be mandatory or voluntary
Intertwined with the decision of private versus public reporting is whether reporting will be mandatory or voluntary. Public reporting works best with a mandatory participation requirement, otherwise low-scoring providers can opt out. Private reporting, however, is workable within both mandatory and voluntary contexts. Some sponsors, for example because of their mission or inability to access the needed data without the cooperation of the provider community, are not in a position to mandate reporting.
Step 3: Select quality measures
Perhaps the most formidable and critical decision is determining what measures to include in a report card effort. Box 1 provides some considerations.
Work with consumers, providers and quality measurement experts in selecting measures. The measures in Tables 2 and 3 may provide a starting point for discussion among stakeholders.
Select indicators deemed to be valid and reliable at the provider-specific level. As predictors of quality, many view structural standards as necessary but insufficient. Prior to choosing any structural or process indicator, there must be evidence that causally links it to a desired outcome. Interestingly, while many consider outcomes to be the gold standard of quality measurement, Donabedian (1988) makes the point that neither process nor outcomes measures are inherently superior.
Select measures that can be tracked and monitored feasibly within constraints of data collection and information system capacities. This incorporates considerations of collection and monitoring costs and the reasonableness of measurement burden. Quality measurement must not pose an excessive burden on any of the parties (Medicare Payment Advisory Commission 2003). Structural attributes tend to be the easiest to assess. Process measures tend to be time-consuming to track, as they often require direct observation or medical chart review and abstraction. Outcomes, at least technical outcomes, pose particular measurement challenges. For example, many outcomes are delayed (i.e. they occur after the provider–patient encounter) and so information about them is not easy to obtain (Donabedian 1988). Outcomes are influenced by factors other than provider performance, and as a result, outcome indicators require sophisticated risk-adjustment methodologies, which in turn require sophisticated data information systems. Further, certain outcome measures can be particularly infeasible to assess at the level of the individual physician because of the infrequency of the outcome. Interpersonal outcome measures, in contrast, avoid some of these challenges.
To the extent risk adjustment is used (i.e. for technical outcome measures), avoid a proprietary formula that cannot be made available to participating providers. Measure transparency is critical.
Include measures that are actionable by the provider. This has two requirements. First, a remedial course of action must be apparent for providers receiving a low rating. Structure and process measures have the advantage of being directly actionable by providers. Technical outcomes, on the other hand, are influenced by a number of factors, making it less obvious what went wrong in the case of a bad outcome. Secondly, the course of action must be within the provider's authority to influence.
Include both technical and interpersonal performance measures.
Choose measures for which there is wide variation in performance. It is fruitless to devote resources to measurement of aspects of quality that are not problematic. Include indicators that reflect quality concerns in the local community.
Include measures that cut across diagnoses and treatments to offset a provider tendency to take expedient steps to improve only measured performance.
Avoid selecting measures that might discourage providers from treating complex patients (Medicare Payment Advisory Commission 2003).
Explore opportunities to coordinate with other potential sponsors. For example, the US Centers for Medicare and Medicaid Services and the US Joint Commission for Accreditation of Healthcare Organizations made a deliberate attempt to coordinate their selection of measures, so as not to overburden hospitals with multiple measurement schemes and to concentrate – rather than fragment – voice for quality improvement.
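The risk-adjustment consideration above can be illustrated with a transparent, non-proprietary calculation: comparing each provider's observed outcome count with the count expected given its patients' risk profiles, via the commonly used observed-to-expected (O/E) ratio. All provider names, risk probabilities and outcomes below are hypothetical, and the sketch assumes a prior model has already assigned each patient an expected risk.

```python
# A minimal sketch of transparent risk adjustment using observed-to-expected
# (O/E) ratios. All identifiers and numbers are illustrative, not drawn
# from the article.

from collections import defaultdict

# Hypothetical patient-level records: (provider_id, expected_risk, outcome)
# expected_risk = model-predicted probability of the adverse outcome
# outcome = 1 if the adverse outcome occurred, else 0
records = [
    ("hospital_A", 0.10, 1), ("hospital_A", 0.05, 0), ("hospital_A", 0.20, 0),
    ("hospital_B", 0.40, 1), ("hospital_B", 0.35, 1), ("hospital_B", 0.30, 0),
]

def oe_ratios(records):
    """Return each provider's ratio of observed to expected adverse outcomes."""
    observed = defaultdict(int)
    expected = defaultdict(float)
    for provider, risk, outcome in records:
        observed[provider] += outcome
        expected[provider] += risk
    # O/E < 1 means fewer adverse outcomes than the case mix predicts
    return {p: observed[p] / expected[p] for p in observed}

for provider, ratio in sorted(oe_ratios(records).items()):
    print(f"{provider}: O/E = {ratio:.2f}")
```

Because every term in the calculation can be published alongside the report card, participating providers can verify their own scores, which is the transparency the text calls for.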
Step 4: Determine the frequency of reporting
Some assessments have failed to make a lasting impact because they were one-time events. Incentives for reform and improvement are more likely to increase if service providers know they will be monitored again (World Bank 2003).
Step 5: Develop the plan to collect data
Intertwined with several other decision points, the method of data collection must be specified, such as facility self-reported surveys, on-site facility surveys, household surveys or patient surveys, all of which are feasible in developing countries.5 Other data collection methods, such as facility staff surveys, have also been proven feasible in developing countries. Standards for data definition and data transmission protocols must be detailed. Survey instruments need to be developed and tested. Data collection decisions need to incorporate capacity and cost considerations.
5McNamara (2003) reports on a review of a small convenience sample of four facility surveys, which suggests they are particularly adept at capturing data on technical structure and interpersonal structure, and a review of a small convenience sample of four household surveys, which suggests they are suited to providing information on a broader range of technical and interpersonal performance.
Step 6: Develop the plan for data verification
In the case of self-reported information for public report cards, audits or spot checks are needed to verify accuracy.
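A spot-check audit of self-reported data can be sketched as a simple random draw: a fixed fraction of each provider's submitted records is selected for on-site verification. The record identifiers, sampling fraction and minimum sample size below are all illustrative assumptions.

```python
# A minimal sketch of drawing an audit sample for data verification.
# Record IDs, the 5% fraction and the minimum of 2 are illustrative.

import random

def audit_sample(record_ids, fraction=0.05, seed=2006, minimum=2):
    """Randomly select records for spot checking; a seeded RNG makes the
    draw reproducible, so the sampling procedure itself is auditable."""
    rng = random.Random(seed)
    n = max(minimum, round(len(record_ids) * fraction))
    n = min(n, len(record_ids))
    return sorted(rng.sample(record_ids, n))

reported = [f"rec-{i:04d}" for i in range(200)]
print(audit_sample(reported))  # 5% of 200 records -> 10 record IDs
```

Seeding the generator is a design choice: it lets the sponsor demonstrate to providers that the audit sample was not hand-picked.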
Step 7: Develop the dissemination strategy
Dissemination plans should include briefing the provider community on the results and being responsive to their needs for technical assistance and tools to facilitate remedial action. Public reports, in addition, should be released through media and community organizations (Balakrishnan 2003), and be accompanied by consumer educational forums to ensure broad understanding of quality indicators and of how report cards can be used in discussions with providers and in selecting providers. To the extent that the public lacks knowledge about indicators of the clinical competency of providers, a normative benchmark might be included.
Step 8: Consider changes to the provider payment scheme to reinforce report card incentives for quality improvement
Purchasers as sponsors of report cards are in a position to adopt financial bonuses and penalties to reinforce performance on key reported quality attributes (McNamara 2005a). The US Medicare programme, for example, has initiated a pilot programme to pay hospitals based on performance criteria, which include technical standards currently being tracked and reported (CMS 2003). Some US quality activists envision a continuum of purchaser efforts that progresses sequentially from private reporting to public reporting to quality-based payment.
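The bonus-and-penalty idea above can be sketched as a simple tiered payment rule: providers scoring above a bonus threshold receive a small add-on, those below a penalty threshold a deduction. The thresholds, adjustment rate and scores are illustrative assumptions, not parameters from the Medicare pilot cited in the text.

```python
# A minimal sketch of quality-based payment adjustment. All cutoffs,
# rates and scores are hypothetical.

def payment_adjustment(base_payment, quality_score,
                       bonus_cutoff=0.90, penalty_cutoff=0.60, rate=0.02):
    """Return the adjusted payment for one provider, given a quality
    score scaled to the interval [0, 1]."""
    if quality_score >= bonus_cutoff:
        return base_payment * (1 + rate)   # bonus tier
    if quality_score < penalty_cutoff:
        return base_payment * (1 - rate)   # penalty tier
    return base_payment                    # no adjustment

print(payment_adjustment(100000, 0.95))  # bonus tier
print(payment_adjustment(100000, 0.75))  # neutral tier
print(payment_adjustment(100000, 0.50))  # penalty tier
```

A tiered rule of this kind is one way a purchaser could reinforce report card incentives without renegotiating the underlying payment scheme.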
Discussion
There is evidence that provider-specific comparative reporting, and in particular public reporting, enhances provider accountability and prompts improvements in quality of care.
Provider report cards could be tailored specifically to help address a country's priority concerns, or to complement global heath care initiatives. For example, comparative performance reports could be developed to examine quality disparities between providers serving the poorest of the poor and those serving more advantaged populations as part of a larger ‘pro-poor’ policy strategy. Similarly, report cards could be tailored to support the achievement of one or more Millennium Development Goals.
Detractors in developing countries may argue, as they have in the US, that quality of care is difficult if not impossible to measure, dooming report cards to failure. But contrary to what might be expected, a wide range of provider-specific indicators are already being used in several developing countries to track quality of care.
Some stakeholders may argue that report cards are not feasible in the under-resourced health care systems of developing countries, systems with limited information infrastructure. But pioneering communities in countries such as India and the Philippines have successfully implemented provider-specific report cards using data from patient satisfaction surveys. Uganda provides a more comprehensive model of performance monitoring and public disclosure. These examples provide evidence of report card feasibility within some developing country contexts. Report cards are not a strategy that is exclusive to developed countries.
As stakeholders – governments, quasi-governmental organizations, purchasers, provider professional associations, media and civil society – explore options to enhance provider accountability for quality of care in their respective countries and communities, provider-specific quality reporting is one worthy of debate. Interrelated design options span decisions related to private or public usage, sponsorship, type of provider, voluntary or mandatory participation by providers and measure sets.
Report cards are not intended as a panacea for all quality problems, even according to the most ardent of enthusiasts, but rather as one of a number of approaches that, in some country and community contexts, might be pursued to complement and enhance regulatory, payment and training activities as part of an overarching strategy to improve quality.
Biography
Peggy McNamara, MSPH, is a Senior Policy Analyst at the Agency for Healthcare Research and Quality, a US government agency conducting research to support evidence-based health care delivery. Her research portfolio includes a special focus on how health sector stakeholders in developed and developing countries, and in particular purchasers, promote quality of care. Recent projects include collaborations with the World Bank, Project HOPE, World Health Organization, the Institute of Medicine's Board on Global Health, and the Economic and Social Research Institute of Ireland. Previous international experience includes serving on the USAID management team for the Partnerships for Health Reform Plus project, and as a Fellow at the Eastern Health Board in Ireland. Past domestic policy experience includes work at the Blue Cross Blue Shield Association and the New Jersey State Department of Health. Ms McNamara earned her masters in public health from the University of North Carolina, where she was awarded a US Public Health Service Traineeship. She is currently a doctoral candidate at the University of Michigan's School of Public Health, where she was awarded a Pew Fellowship.
Although the author takes sole responsibility for this review, the clarity and organization of its content benefited greatly from thoughtful comments made by several individuals who generously contributed their time and expertise in reviewing a formative draft. Appreciation is expressed to: Jan De La Mare, US Agency for Healthcare Research and Quality; Itziar Larizgoitia, World Health Organization; Denise Remus, US Agency for Healthcare Research and Quality; William Savedoff, formerly with World Health Organization and now with Social Insight; and Nicole Valentine, World Health Organization. A special thank you is extended to Tomas Allen, World Health Organization Library, and Yafu Zhao, CODA, Inc., for their capable and prompt assistance with references.
This paper was supported by funds from the World Health Organization.
The opinions stated in this paper are solely those of the author and do not necessarily reflect the views of the World Health Organization, or those of the US Agency for Healthcare Research and Quality.
References
Abramson WB.
AHRQ.
Balakrishnan S.
Cercone J, Sanigest, Rosenmoller M.
Chassin MR.
CMS.
Consumer Checkbook.
Donabedian A.
Eichler R, Auxila P, Pollock J.
Fronczak N, Loevinsohn B, Lorn KS, Vun MC.
Heiby J.
Hibbard JH.
Hibbard JH, Stockard J, Tusler M.
Jack W.
JCAHO.
Jencks SF, Huff ED, Cuerdon T.
Kingdom B, Jagannathan V.
Langenbrunner JC, Liu X.
Leapfrog Group.
Marshall MN, Shekelle PG, Leatherman S, Brook RH.
McCormick D, Himmelstein DU, Woolhandler S, Wolfe SM, Bor DH.
McNamara P.
McNamara P.
McNamara P.
Medicare Payment Advisory Commission.
Mehrotra A, Lee S, Dudley RA.
Murray CJL.
Nightingale F.
Paul S.
Paul S. Undated. Making voice work: the report card on Bangalore's public services. Unpublished manuscript.
Rhode Island Department of Health, USA.
Schauffler HH, Mordavsky JK.
Schneider EC, Lieberman T.
Slack K, Savedoff WD.
Society of Thoracic Surgeons.
Uganda DISH.
World Bank.
World Bank.