Evaluation of the Use of AHRQ and Other Quality Indicators
Chapter 3. The Market for Quality Indicators
Our environmental scan revealed strong demand for hospital care quality indicators. Demand for indicators for research and quality monitoring is strong and has a relatively long history. Demand is even stronger, and growing rapidly, for quality indicators that can be used for newer purposes. These purposes include public reporting to inform consumers' choice of providers and otherwise drive provider improvement; pay-for-performance to reward high-quality providers; the development of tiered insurance products; and using quality indicators to select a network of providers.
This demand has led to a proliferation of quality indicators. In addition to AHRQ, the market leaders in developing hospital quality indicators are the Centers for Medicare and Medicaid Services (CMS), the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), the Hospital Quality Alliance (HQA—a collaboration between CMS, JCAHO, and several other organizations), and the Leapfrog Group. In this section, we discuss these and other developers and vendors of quality indicators, and how the quality indicators developed by each of these agencies/organizations compare to the AHRQ QIs. Our environmental scan identified two main categories of players in the market for quality indicators. The first type, "developers," includes organizations that develop, support, and distribute quality indicators. The second type, "vendors," includes organizations that develop and/or sell quality measurement products to providers, insurers, and others. Their products often include the AHRQ QIs (or variants thereof), indicators from other developers, and/or indicators developed by the vendors themselves.
The environmental scan identified 12 organizations that have developed indicators that are similar in some way to the AHRQ QIs. The organizations that have developed indicators that are widely used and focused on hospitals are summarized in Table 3.1 and described below.
Although there are similarities between these indicators and those developed by AHRQ, none of the indicators developed by organizations other than AHRQ were comparable to the AHRQ QIs on all of their major characteristics: based on administrative data, outcome-focused, hospital-focused, based on transparent methodology, and available for public use.
JCAHO/CMS/HQA. Both JCAHO and CMS have developed quality indicators of hospital care for common conditions. CMS's measures were originally used for quality improvement initiatives conducted by Medicare Quality Improvement Organizations (QIOs). JCAHO's Core Measures have been used as part of the JCAHO hospital accreditation process since 2002. They cover five clinical areas: (1) acute myocardial infarction, (2) heart failure, (3) pneumonia, (4) surgical infection prevention, and (5) pregnancy and related conditions. JCAHO-accredited hospitals choose 3 of these 5 areas for reporting, depending on the services they provide. JCAHO publishes the results of the measures publicly on the Web.21
Since the measures had significant overlap, CMS and JCAHO agreed in 2004 to align specifications for overlapping measures and to maintain them as a shared set of measures. A subset of the joint CMS-JCAHO measures was later selected by the HQA, a public-private partnership for measuring and reporting hospital quality. Their Hospital Quality Measures are now publicly reported on the Web for both accredited and non-accredited hospitals.22 They are also used in other CMS activities such as the Premier pay-for-performance demonstration project.23
Like the AHRQ QIs, the CMS/JCAHO/HQA measures are widely used and viewed as a national standard.d A key difference between those measures and the AHRQ QIs is that they are largely based on clinical data collected from medical records rather than administrative data. JCAHO has estimated that collection of the clinical data for the Core Measures takes an average of 22-27 minutes per case for acute myocardial infarction, heart failure, and pneumonia.24
A second key difference is that the CMS/JCAHO/HQA measures are process indicators while the AHRQ QIs are outcome indicators. Another difference is that, while the AHRQ QIs reflect a broad range of conditions, the CMS/JCAHO/HQA measures currently reflect only five conditions; however, JCAHO and CMS are currently developing indicators in additional clinical areas.
The method used by JCAHO to implement its Core Measures is also different from that used for the AHRQ QIs. Hospitals pay vendors to measure the JCAHO Core Measures on their behalf using standardized specifications. Hospitals have made a wide variety of arrangements with vendors for Core Measure collection and reporting, according to their specific needs and characteristics. All vendors of the JCAHO Core Measures must undergo a certification process through which JCAHO ensures that they have appropriately implemented the measures.
Due to these differences, the CMS/JCAHO/HQA measures and the AHRQ QIs can be considered complementary in some respects. A number of the users of the AHRQ QIs interviewed (11 of 36) also use the JCAHO/CMS/HQA measures.
The CMS/JCAHO/HQA measures and the AHRQ QIs compete only in the sense that they draw on the same limited hospital resources for quality measurement. Hospitals are required to report the JCAHO Core Measures for accreditation and may have limited resources for other quality measurement activities, including the AHRQ QIs. One interviewee told us:
AHRQ could do a lot of terrific things with the AHRQ QIs, but facilities are trying to meet requirements right now and don't have time and resources to work with other quality indicators to the exclusion of what they might like to do. Hospitals are doing only what they have to do—either by mandate or by the market.e
Leapfrog. The Leapfrog Group has developed a set of quality indicators that are widely used and considered to be a national standard. The indicators are intended to improve value in health care purchasing. Provider performance on the indicators is presented in a public report on Leapfrog's Web site. In addition to developing and marketing its own quality indicators, Leapfrog operates a pay-for-performance program, the Leapfrog Hospital Rewards Program, which uses JCAHO Core Measures and an efficiency measure in addition to the Leapfrog indicators. The program is implemented through vendors, who pay Leapfrog for every participating hospital, and then charge hospitals accordingly.
Unlike the AHRQ QIs, most of the Leapfrog indicators are not outcome-focused and require primary data collection. The indicators are organized into four content areas called "Leaps": (1) computerized physician order entry, (2) intensive care unit staffing, (3) high-risk treatments, and (4) safe practices. Data are collected through a survey of hospitals. Leaps 1, 2, and 4 are structure and process indicators, such as use of a computerized physician order entry system or staffing hospital intensive care units with intensivists (physicians who specialize in critical care medicine). Leap 3 (high-risk treatments) overlaps considerably with the AHRQ IQIs. It measures procedure volume and risk-adjusted mortality for selected conditions. Leapfrog is currently aligning its specifications with those used in the AHRQ IQIs in order to minimize the reporting burden for hospitals.
Institute for Healthcare Improvement (IHI). The IHI measures overall hospital mortality as part of its activities to improve hospital quality. This measurement activity is conducted in conjunction with the implementation of a specific set of interventions that are intended to improve quality in participating hospitals. The indicator used is similar to the AHRQ IQIs in that it is based on risk-adjusted mortality associated with hospital stays and is based on the analysis of administrative data. Unlike the AHRQ IQIs, however, the IHI measures the mortality rate for all conditions. Hospital- and area-level characteristics are used in regression models to control for patient risk. This measurement approach originated in the United Kingdom and has also been applied to hospitals in many countries other than the United States.25
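The regression-based risk adjustment described above can be sketched in a few lines. The following is an illustrative example only, not the IHI's actual model: the logistic coefficients and patient characteristics are hypothetical, and real implementations estimate the model from large reference datasets. The sketch computes a standardized mortality ratio (observed deaths divided by the deaths expected given case mix) from discharge-level administrative data.

```python
import math

# Hypothetical logistic-model coefficients; a real model would be estimated
# from a national reference dataset with many more risk factors.
COEFS = {"intercept": -3.0, "age": 0.03, "comorbidity": 0.5}

def predicted_death_prob(age, comorbidity_count):
    """Predicted probability of in-hospital death from the (hypothetical) model."""
    z = (COEFS["intercept"]
         + COEFS["age"] * age
         + COEFS["comorbidity"] * comorbidity_count)
    return 1.0 / (1.0 + math.exp(-z))

def standardized_mortality_ratio(discharges):
    """discharges: list of (age, comorbidity_count, died) tuples.

    Returns observed deaths / expected deaths; values above 1 indicate
    more deaths than the case mix would predict.
    """
    observed = sum(died for _, _, died in discharges)
    expected = sum(predicted_death_prob(a, c) for a, c, _ in discharges)
    return observed / expected

# Toy data: three discharges at one hospital.
smr = standardized_mortality_ratio([(70, 2, 1), (55, 0, 0), (80, 3, 0)])
print(f"SMR = {smr:.2f}")
```

The same observed-over-expected structure underlies the risk-adjusted AHRQ IQI mortality rates; the approaches differ mainly in which conditions are included and which risk factors enter the model.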
States. We also interviewed representatives from California and Pennsylvania, two states that have developed their own methodologies for measuring quality using administrative data. These states developed their own measurement approaches largely because their public reporting efforts predate the development of the AHRQ QIs. Both states also use data elements that are unavailable in the hospital administrative data collected in most other states. These features include a flag to indicate conditions that were present on hospital admission (California) and detailed data on severity of illness (Pennsylvania). Other states, such as New York, have also developed their own measurement approaches which may predate the AHRQ QIs or use data elements not available in other states.
Vendors. We interviewed several vendors who, in addition to implementing existing measures from other developers in their measurement tools, have also developed proprietary indicators. Some of these indicators are similar to the AHRQ QIs in that they are based on administrative data and are outcomes indicators. The key difference is that the definitions and specifications of most vendors' indicators are proprietary. The vendors' indicators have also not always been validated as rigorously as the AHRQ QIs. In the next subsection, we discuss the vendors identified by the environmental scan in more detail.
Typically, the AHRQ QIs are included in software tools that are marketed to hospitals for quality improvement or to insurers or business groups for hospital profiling. The vendors' products offer additional functionality to the basic AHRQ QI software. For example, the vendors' measurement tools often include non-quality indicators that inform hospital administration, such as financial performance indicators. The tools are often designed to offer users a variety of reporting options. These measurement tools may be particularly useful for hospitals that do not have the in-house expertise or staff time to construct indicators of quality and other aspects of care from raw data. Similar tools are used by insurance companies and other organizations.
As mentioned above, many of these tools include proprietary quality indicators developed by the vendors themselves. In addition, many of the vendors are licensed to implement the JCAHO Core Measures, and many also produce indicators from other developers, such as Leapfrog.
Some users of the AHRQ QIs whom we interviewed use vendors for their measurement activities and expressed a high degree of satisfaction with the vendors' services. On the other hand, some users expressed a concern that the AHRQ QIs as implemented by some vendors may differ in key respects from the official AHRQ QI specifications, and that the proprietary nature of the tools makes these differences non-transparent. One hospital association captured this sentiment:
The AHRQ QIs are standardized measures, risk-adjusted, and not in a "black box" so we can get the numerator and denominator and make them accessible to hospitals. The industry is sick and tired of vendors and consulting firms creating black boxes.
Another interviewee sounded similar themes:
The problem is that if there's any "black box" methodology, [users] won't touch it—it's politically dead, even if there is an underlying valid scientific process. Hospitals want to check their own numbers. [The vendors'] offers sound nice. The problem is, a hospital can't replicate the findings or understand differences in methodology/calculations. [Users] like transparency, a tool that is open, where everyone can see what is happening, hospitals can replicate the results, then everyone can talk about the differences. It democratizes quality reporting.
While the quality indicators developed by organizations other than AHRQ share certain characteristics with the AHRQ QI program, there are no other sources of indicators that are viewed as a national standard and are also publicly available, transparent, hospital-focused, outcome-focused, and based on administrative data. Many of our interviewees stressed that the AHRQ QIs fill an important void in this respect. A representative of an integrated delivery system described the process of searching for quality indicators that could be readily used for monitoring quality and guiding quality improvement activities:
When we started looking for indicators, we really struggled to find valid quality measures based on readily available data and with benchmark potential. Without manually auditing patient charts, and coming up with numerator and denominator definitions on our own, there was no way we could do it by ourselves. AHRQ offered the set of measures prescribed for our needs.
A representative of a state doing public reporting told us:
If we didn't have the AHRQ QIs, it would be difficult as a state to come up with our own indicators and there are not many other choices that are based on administrative data. Until electronic medical records are commonplace (5-10 years at least), we need to deal with using administrative data.
An insurance company representative highlighted the importance of AHRQ's role in the quality indicator market, stating that more marketing of the QIs is needed:
AHRQ is doing something that no one else is doing. We have to have a national standard, something used across the country for comparison. [Does AHRQ] realize they're one of the only good options out there? They should really pick up the outreach so that others will pick up using the QIs.
3.3.1. Overview of users and uses of the AHRQ QIs
AHRQ's unique position in the market for quality indicators has led to a wide proliferation of uses for the AHRQ QIs. Our environmental scan of users of the AHRQ QIs identified 114 users of the indicators and a range of different purposes, including public reporting, quality improvement/benchmarking, pay-for-performance, and research. Table 3.3 summarizes the number of users of the AHRQ QIs by type of organization and purpose of use.
The most common uses of the AHRQ QIs include:
- Research. We identified 43 organizations that use AHRQ QIs for research. For example, Leslie Greenwald and colleagues used the AHRQ QIs to compare the quality of care provided in physician-owned specialty hospitals and competitor hospitals.26
- Quality improvement. We identified 23 organizations that use the AHRQ QIs as part of a quality improvement activity, including reports benchmarking performance against peers; however, these organizations do not release the quality information into the public domain.g
- Public reporting. We identified 20 organizations using the AHRQ QIs for public reporting. We classified an activity as "public reporting" if a publicly available report was published that compares AHRQ QI results between hospitals or geographic areas such as counties. The organizations using the AHRQ QIs for public reporting, with Web links to the reports, are listed in Table 3.4.
- Pay-for-Performance. We identified 4 organizations that are using the AHRQ QIs in pay-for-performance programs. Three were health plans and one was a CMS demonstration project.
3.3.2. Uses of Specific AHRQ QIs
We asked users of the AHRQ QIs, and vendors of quality measurement packages including the AHRQ QIs, which specific QIs they were using. Among organizations we interviewed, the PSIs and IQIs were used more frequently than the PQIs. Of the 42 organizations that responded, 33 were using the PSIs, 30 were using the IQIs, and 17 were using the PQIs.
Within the PSI and IQI sets, some indicators were used more frequently than others. Users of the PQIs, on the other hand, were more likely to use every PQI. There were no meaningful differences in the frequency of use of particular PQIs (data not shown).
3.3.2.1. Use of IQIs
Figure 3.1 shows the frequency of use of each IQI by the users and vendors we interviewed. The IQIs that reflect mortality rates for medical conditions were used most frequently, particularly:
- IQI 16—congestive heart failure mortality (23 users).
- IQI 17—stroke mortality (23 users).
- IQI 20—pneumonia mortality (22 users).
The rates of mortality for certain medical procedures were also commonly used, particularly:
- IQI 12—coronary artery bypass graft mortality (23 users).
- IQI 13—craniotomy mortality (19 users).
- IQI 11—abdominal aortic aneurysm repair mortality (18 users).
- IQI 14—hip replacement mortality (18 users).
- IQI 30—percutaneous transluminal coronary angioplasty mortality (18 users).
The procedure volume indicators were used less frequently, and the procedure utilization rates, both hospital- and area-level, were used least frequently.
3.3.2.2. Use of PSIs
Figure 3.2 shows similar counts of the frequency of use of each PSI. The area-level PSIs were used less frequently than the hospital-level PSIs. Among the hospital-level indicators, there was considerable variation in frequency of use between the indicators. The most frequently used PSIs were PSI 12—postoperative pulmonary embolism (PE) or deep vein thrombosis (DVT) (28 users), PSI 8—postoperative hip fracture (26 users), and PSI 13—postoperative sepsis (26 users). The least frequently used hospital-level PSIs were PSIs 18, 19, and 20—obstetric trauma with instrument, without instrument, and during cesarean delivery (15, 16, and 15 users, respectively).
3.3.3. International uses of QIs
Measuring quality of care has become a policy priority in many countries outside of the United States, and numerous countries and international organizations are in the process of instituting requirements for data collection and reporting of quality indicators.27 The AHRQ QIs are an attractive option for international users, since many countries already require hospitals to report the required administrative data.
Perhaps the most visible international endeavor is the Organization for Economic Cooperation and Development's (OECD) Health Care Quality Indicators (HCQI) Project. The OECD is an intergovernmental economic research institution headquartered in Paris, France, with a membership of 30 countries that share a commitment to democratic government and the market economy. One of its widely used products is OECD Health Data, which provides internationally comparable information on infrastructure, cost and utilization at the health system level,28 but so far no information on quality of care. In an attempt to bridge this gap, in 2003 the OECD brought 23 of its member countries together with international organizations such as the World Health Organization (WHO) and the European Commission (EC), expert organizations such as the International Society of Quality in Healthcare (ISQua) and the European Society for Quality in Healthcare (ESQH), and several universities.29 The goal of the meeting was to work on the development and implementation of quality indicators at the international level.
The project initiated its work with two major activities. The first was an effort to introduce a pilot set of quality indicators that can be reported by a substantial portion of the OECD countries. This activity recently led to the 2006 release of an initial list of indicators and corresponding data.30 The second activity was to identify additional quality indicators for five priority areas: cardiac care, diabetes mellitus, mental health, patient safety, and primary care/prevention/health promotion. Through an expert panel process, 86 indicators were selected for the five areas and the OECD is currently investigating the availability and validity of required data.31 Several AHRQ PSIs were selected for the patient safety area32 and an indicator similar to the PQIs was selected for the primary care area.33
Researchers from several countries have tried to run the PSIs against national discharge data both as part of their participation in the HCQI Project and also for other projects. This has been attempted in Canada, Germany, Italy, and Sweden. In addition, a group in Belgium successfully constructed some of the HCUP indicators, the predecessors of the AHRQ QIs, from national data.34
At this point, results from those projects are largely unpublished in English-language journals. But during a recent OECD meeting in Dublin, Ireland, experts from 15 countries discussed issues around the adoption of the AHRQ PSIs in countries other than the United States. Researchers from several countries had cross-walked the AHRQ PSI specifications, which are based on the U.S.-specific ICD-9-CM diagnosis codes, to ICD-10 diagnosis codes, which most countries are now using.
This conversion was found to be unproblematic, in particular because only a limited number of diagnosis codes had to be cross-walked to construct the indicators. A greater issue turned out to be the conversion of procedure codes. The AHRQ definitions are based on the ICD-9 procedure classification, whereas other countries use national procedure classification systems. Similarly, other countries use different DRG groupings than those used in the United States. Substantial work on mapping the different coding systems used in the U.S. and in other countries is needed.
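The diagnosis-code cross-walk described above amounts to translating each code list in an indicator's specification from one classification to another. A minimal sketch follows; the code values are placeholders, not the actual AHRQ PSI specifications, and real crosswalks (such as the one-to-many mappings between ICD-9-CM and ICD-10) require clinical review of every unmapped or ambiguous code.

```python
# Hypothetical ICD-9-CM -> ICD-10 mapping table; real code values differ.
ICD9_TO_ICD10 = {
    "9991": ["T8051XA"],          # one-to-one mapping (placeholder codes)
    "99702": ["I9781", "I9782"],  # one-to-many mapping (placeholder codes)
}

def crosswalk_numerator(icd9_codes):
    """Translate an indicator's ICD-9-CM numerator code list into ICD-10.

    Codes absent from the mapping table are returned separately so they can
    be reviewed by hand, which is where most cross-walking effort goes.
    """
    mapped, unmapped = [], []
    for code in icd9_codes:
        if code in ICD9_TO_ICD10:
            mapped.extend(ICD9_TO_ICD10[code])
        else:
            unmapped.append(code)
    return mapped, unmapped

mapped, unmapped = crosswalk_numerator(["9991", "99702", "9993"])
print(mapped)    # codes carried over to the ICD-10 specification
print(unmapped)  # codes flagged for manual review
```

As the text notes, the harder problem is procedure codes and DRG groupings, where no near-one-to-one mapping exists between the U.S. and national classification systems.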
In countries that have tested the AHRQ PSIs, the average rates were reported to be similar to those observed in the United States. Countries that do not yet have DRG-based prospective payment systems saw much lower rates, possibly resulting from less thorough coding of secondary diagnoses in the absence of implications for payment.
Our interviews show that there is interest in using the AHRQ QIs internationally and sufficient data and technical capability to implement them. This makes it likely that some AHRQ QIs will be adopted by the OECD HCQI Project for international comparisons of quality of care and patient safety. Furthermore, as several international organizations are striving to align their measurement and reporting activities,h selected AHRQ QIs could become part of an international measurement standard.
3.3.4. "Non-users" of QIs
We identified and interviewed representatives of five organizations that are currently using quality indicators other than the AHRQ QIs but that are similar to the AHRQ QIs. Our goal was to understand why these organizations were not using the AHRQ QIs, and specifically whether this decision was based on an evaluation of the merits of the QIs. Three of the organizations were using quality indicators that they had developed themselves and which predated the AHRQ QIs. They did not voice any strong objections to the AHRQ QIs but preferred their own indicators due to various methodological factors and the fact that their indicators were better tailored for their specific needs. The other two organizations had elected not to use the AHRQ QIs because they were not already in use by the hospitals that would be participating in the organizations' quality measurement activities. The JCAHO Core Measures were chosen instead because they were already being collected by hospitals.
3.3.5. Examples of uses of QIs
In order to illustrate how the AHRQ QIs are being used, we have chosen examples of specific uses for quality improvement, public reporting, and pay-for-performance.
3.3.5.1. Example of AHRQ QI use for quality improvement
Figure 3.3 was drawn from a report provided to hospitals by a hospital association we interviewed. Reports such as the one we reviewed are sent to hospital CEOs quarterly. The reports include all AHRQ IQIs (shown in the example) as well as all AHRQ PSIs. The report also includes indicators from JCAHO and Leapfrog (not shown). Hospitals are presented with targets based on benchmarks calculated by the hospital association. The hospital association works with hospitals to help them explain why they do not meet targets in areas of poor performance.
3.3.5.2. Example of AHRQ QI use for public reporting
The State of Florida uses the AHRQ QIs as part of a public reporting tool aimed to help consumers choose a hospital. Figure 3.4 captures a segment of a Web page comparing hospitals in Broward County on one of the AHRQ IQIs, hip fracture mortality (IQI 19). Users can click on a hospital to get more detailed information on quality as well as the hospital's characteristics (teaching status, non-profit status, etc.) and location.
3.3.5.3. Example of AHRQ QI use for pay-for-performance
Figure 3.5 is drawn from a report provided to hospitals by an insurer we interviewed. The example extracts one AHRQ PSI (PSI 12), postoperative pulmonary embolism (PE) or deep vein thrombosis (DVT). The report allows hospitals to compare their performance to that of their peers. Good performers earn an incentive payment.
f. We attempted to determine whether vendors' proprietary products included the AHRQ QIs, but since limited information is available from some vendors, some mistaken attribution is possible. There are also other vendors with similar quality measurement products that do not include the AHRQ QIs, but they were not included in our study.
g. Due to the methods used to identify users, the scan is likely to have significantly undercounted the number of organizations (especially hospitals and hospital associations) using the AHRQ QIs for internal quality improvement activities, since this type of use rarely results in publicly available information that could be used to identify the user in an environmental scan.
Page originally created September 2012