Future Directions for the National Healthcare Quality and Disparities Reports

Chapter 6: Improving Presentation of Information (pt. 2)

Telling a Story in the NHQR and NHDR

The committee recommends that the NHQR and NHDR tell a clear and compelling story about the impact of making progress—or of not making progress. The ways in which information is presented and summarized in the reports and related products can enhance or impede users' understanding of the messages the reports are meant to convey. For that reason, the committee believes that AHRQ should move the reports beyond their current chartbook format to make them less a catalog of data and more a comprehensive story that conveys key messages through text, graphs, and displays.7 The committee believes that doing this effectively requires enhancing the presentation of takeaway messages on the state of quality and disparities, focusing attention on closing gaps in performance, including benchmarks to allow comparisons with high-quality performance, identifying ways to effect change, and providing information that contributes to the development of the national health care data infrastructure (Box 6-2).

Box 6-2. Key Elements of Telling a Story in the NHQR and NHDR

Enhancing Takeaway Messages that Address Closing the Performance Gap

  • "At the current rate of change, it will take 'X' years to close the gap between current practice and the recommended standard of care (goal level or the benchmark)."
  • The net health benefit of closing the gap (including clinical preventable burden and cost-effectiveness) is quantified.
  • Areas on which to focus attention so as to more effectively improve quality are specified.

Identifying Ways to Effect Change in the Health Care System

  • Highlight the impact of evidence-based policies that can help drive change.
  • Provide data analyses.
  • Include vignettes or links to innovative practices that have resulted in higher performance.

Presenting Benchmarks and Other Data

  • Benchmark of best-in-class performance.
  • Between- and within-state variation, when available.
  • Variation by sociodemographic variables (e.g., race, ethnicity, language need, socioeconomic status, and insurance status).
  • Data presented by accountable units, whenever feasible (e.g., types of payers, delivery sites).
  • Displays with visual clarity and embedded explanations of the essential finding(s).
  • Meaningful summarizations.

Contributing to the National Health Care Data Infrastructure

  • Illustrating developmental* and emerging measures even when only subnational data are available.
  • Highlighting when data are unavailable and when greater efforts are needed for national collection.

* Developmental refers to measures that are currently partially developed but not yet well tested or validated, or measures that have been validated but still lack sufficient national data on which to report.

Enhancing the Presentation of Takeaway Messages

In the Healthcare Research and Quality Act of 1999,8 Congress directed AHRQ to submit "an annual report on national trends in the quality of health care," and AHRQ has interpreted this as needing to present "assessments of change over time" (Moy et al., 2005). Although documenting the past performance of the U.S. health care system is important and historical data certainly play a role in forming a comprehensive picture of health care quality and disparities, users of the national healthcare reports have indicated that the performance of past years (especially more than 5 years ago) is not necessarily helpful for assessing where and how quality improvements can be made today (Lansky, 2009; Martinez-Vidal and Brodt, 2006). The committee believes it would be more useful for AHRQ to interpret national trends as a way to inform the future, using available historical data to inform readers of the likelihood of closing gaps in health care quality at the current pace. Forward-looking messages regarding national trends for the future could be determined using the following central pieces of information:

  • The Nation's current level of performance (expressed using means and standard errors).
  • How the Nation has achieved the current level of performance (expressed by the historical annual rate of change and standard error of the estimated change).
  • How far the Nation has to go to close the performance gap between current practice and the recommended standard of care (goal or the benchmark)—the number of years to achieve the desired performance level based on the historical annual rate of change and a corresponding interval estimate.

Using this strategy, AHRQ could transform its wealth of available trend data into an informative direction for the future. Possible templates for presenting rates of change and years to closing quality and disparity gaps are offered in Appendix H.
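The three pieces of information above can be combined into a simple projection. The sketch below uses hypothetical figures (a 72 percent current rate, a 90 percent benchmark, and a 1.5 percentage-point historical annual change with a 0.4 standard error) to illustrate the arithmetic; it is not a template from the reports themselves.

```python
# Sketch (hypothetical figures): estimating the years needed to close a
# performance gap at the historical annual rate of change.

current_rate = 72.0   # current national performance (% receiving care)
benchmark = 90.0      # recommended standard of care or benchmark (%)
annual_change = 1.5   # historical annual rate of change (points/year)
se_change = 0.4       # standard error of the estimated annual change

gap = benchmark - current_rate

# Point estimate: years to close the gap at the current pace
years = gap / annual_change

# Rough 95% bounds on years-to-goal, propagating the uncertainty in the
# annual rate of change (simple interval on the slope).
low_change = annual_change - 1.96 * se_change
high_change = annual_change + 1.96 * se_change
years_high = gap / low_change if low_change > 0 else float("inf")
years_low = gap / high_change

print(f"Gap: {gap:.1f} points; about {years:.0f} years at the current pace "
      f"(roughly {years_low:.0f} to {years_high:.0f} years)")
```

The interval here is a rough bound for illustration; the templates in Appendix H may use different interval methods.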

As previously described in Chapter 4, the impact of closing the gap would be determined as part of the measure selection/ranking process, and the data gleaned in determining the relative ranking of measures (e.g., reduction in clinically preventable burden, increase in net health benefit, and cost-effectiveness) are useful and should be presented for each measure in the reports. Additionally, the benefit to the country—if, for example, all states were performing at the level of the highest one—would also be key information.

Presenting Benchmarks and Other Data

To better convey key messages, data displays should present benchmarks. The committee believes benchmarking is a key tool for continuous quality improvement. Thus, it is expected that benchmarks will change over time depending on the frequency of obtaining updated data from the sources for the national healthcare reports. Goals, on the other hand, tend to be fixed for a longer period and set by an advisory body or at the direction of some entity such as the Secretary of HHS. (Go to Chapter 2 for committee definitions of goals, benchmarks, and targets.) In the context of the national healthcare reports and AHRQ's role, the Future Directions committee emphasizes the use of benchmarks rather than goals because the committee believes the presentation of performance data, but not the setting of national goals, is within AHRQ's purview. Benchmarks reflect empirical facts. On the other hand, the committee believes that the setting of goals for health care quality improvement (e.g., for priority areas and/or measures) requires the direction of the Secretary of HHS.

Goals or fixed targets for measures can complement benchmarks and could be set at various levels of attainment. For example, they may be aspirational—"All patients shall receive." Goals might be set at a lower level if a finding from the measure selection assessment shows that there is little gain in health benefit beyond 85 percent of the target population receiving a service. Or a goal might be set for all states to achieve the rate of the best performing state.

Data illuminating who is delivering care and where care is delivered are necessary to identify opportunities for system change; these accountable units may be states, types of payers (e.g., Medicare, Medicaid, private insurance), or delivery systems. The committee encourages the development and presentation of these data in the reports and State Snapshots. This topic is addressed more fully in Chapter 2.

Identifying Ways to Effect Improvements in the Health Care System

Although the reports by themselves do not effect change, they can link to entities that have improved quality and reduced or eliminated disparities. For policy makers and those engaged in measurement and improvement, having the reports illustrate actual, effective quality improvement interventions alongside comparative data would be useful. As previously discussed, AHRQ's NHQRDRnet site links to AHRQ's Health Innovation Exchange, and this type of connection should also be included in the online version of the reports through embedded hyperlinks. Additionally, AHRQ should consider qualitatively highlighting "islands of excellence" (whether health systems, hospitals, or geographic regions) that consistently deliver recommended care that is less costly and more efficient and that produces better outcomes (Fisher et al., 2008). Such better performing communities or entities can be showcased in textboxes and sidebars.

Currently, AHRQ links State Snapshots to other measure report cards in specific states and should continue such nonfederal linkages. In addition to the Health Innovation Exchange, AHRQ might link with sources such as the Robert Wood Johnson Foundation's (RWJF's) Finding Answers: Disparities Research for Change program (http://www.SolvingDisparities.org), The Commonwealth Fund's "Why Not the Best?" quality improvement resource (http://whynotthebest.org), and the Institute for Healthcare Improvement's (IHI) Web site (http://www.ihi.org). These sources, among others, offer multiple strategies for hospitals, providers, and other actors to improve the quality of health care. The links should be accompanied by an explicit caveat that they are intended to highlight known best or promising practices and that their inclusion should not be construed as an endorsement of the program or entity by AHRQ.

Using Benchmarks to Show Achievement

Benchmarks are one method of comparing data in order to improve the efficiency and the quality of health care (Deming, 1994). In Chapter 2, the committee defined a benchmark as the quantifiable highest level of performance achieved to date. (Some additional definitions of benchmarking are shown in Table 6-3.) Presenting performance data in the context of benchmarks stimulates debate around policy priorities, promotes transparency, fosters accountability, indicates what needs to be done, and supplies concrete milestones for evaluation and identification of areas to improve (Gawande et al., 2009; van Herten and Gunning-Schepers, 2000a,b).

Benchmarks identify "demonstrably attainable," superior performance and encourage others to emulate the practices by which this is achieved (Kiefe et al., 1998, p. 443). The original idea of using benchmarks in Continuous Quality Improvement and Total Quality Management (CQI/TQM) was that organizations could learn from the processes of an organization with better outcomes and adapt those processes, as appropriate, to their own circumstances (Dattakumar and Jagadeesh, 2003; McKeon, 1996). Benchmarking is not a static process; ideally, the level of best performance will continually evolve as positive progress is made, and the benchmark will move accordingly. At each successive stage—or in each publication year of the NHQR or NHDR—a different entity has the potential to take the role of "best-in-class," which may engender a "race to the top" (Weissman et al., 1999).

The committee proposes approaches to benchmarking that AHRQ could incorporate into the NHQR, NHDR, and related products. The benchmarking approaches proposed by the committee do not require AHRQ to develop targets that must be attained by a specific endpoint (as has been done for Healthy People 2010); rather, these strategies use benchmarks to highlight standards of care that are reported in data available to AHRQ.

The Current Use of Benchmarks in the National Healthcare Reports

The NHQR and NHDR were initially envisioned as a means to provide policy makers with snapshots of quality and disparities over time and to allow "providers and payers" to "assess their performance relative to national benchmarks" (Moy et al., 2005, p. 377). The hope was that government agencies, communities, and providers would turn to the NHQR and NHDR to compare their own health care data against national progress. Until recently, AHRQ used only an implicit benchmark—namely, the need to strive for better-than-average performance. Displays in the reports imply that states with rates below average performance should aim to achieve performance rates better than average. In a 2006 review of AHRQ's presentation of state data, state policy makers indicated that presenting performance relative to the national average was misleading: while a state may have been doing better than average on a given measure, if the average was low compared to the recommended standard of care, the level of performance could be taken out of context to indicate that the state need not focus quality improvement efforts in that area (Martinez-Vidal and Brodt, 2006).

For a limited number of measures in the 2008 NHQR and NHDR, AHRQ reports targets established for Healthy People 2010. Partially because Healthy People focuses on measuring health improvement rather than health care improvement, these targets are not available for all measures presented in the NHQR and NHDR. The Healthy People targets are not tied to actual performance achieved by providers and health care organizations, and most targets are consequently viewed as aspirational.9 According to the committee's definition, a benchmark should be demonstrated as being attained by some defined entity, not just as being aspirational. For this reason, Healthy People targets tend not to be the ideal source of benchmarks for the national healthcare reports. While the inclusion of these targets may be useful and warranted as one point of information, they should be presented in conjunction with more realistic benchmarks.

Presenting Best-in-Class Benchmarks

One of the most common and easily understood methods of benchmarking is to provide comparisons relative to top performing nations, states, geographic regions, or health care entities. A key issue in benchmarking is whose performance is being measured and to which audiences the benchmark is relevant. In health care quality improvement, best practices can occur at various levels of the health care system, including at the individual physician level (Kiefe et al., 2001); at the service provision level, such as in intensive care units (Zimmerman et al., 2003); at the health care system level; or at the state level (Reintjes et al., 2007). The committee also explored establishing benchmarks at discrete levels of the health care system (e.g., top decile of hospitals), as well as at the state level. Defining a benchmark can depend on the "class" from which the measure is derived. For example, a benchmark might provide information on the best performance rate among states, the best performance rate among hospitals, the best performance rate among large hospitals, or the best performance rate for care received by Hispanics in any state.

Although it is technically true that AHRQ could choose any "class" from which it would designate a "best-in-class" benchmark, the committee finds that in the context of the national healthcare reports, where much of the analysis is done at the state level, setting benchmarks by state may appeal to a number of relevant audiences and may be most feasible given data availability. State-level data are generally available to AHRQ, and thus state-level benchmarking units can be determined for many, although not all, measures in the NHQR and NHDR. This approach could satisfy the needs of congressional and state policy makers, principal audiences to which the reports are geared. A 2004 AHRQ publication, A Resource Guide for State Action, was designed to help states assess the quality of care delivered within their borders and develop strategies to address gaps in quality. The Resource Guide advised that the "rate for the top State or top tier of States" may be "assumed to be a feasible goal for States to achieve" (Coffey et al., 2004).
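Once state-level rates are in hand, a state best-in-class benchmark is straightforward to compute. The sketch below uses hypothetical state rates and also shows the Resource Guide's "top tier of States" variant (here, the mean of the top three states); the state abbreviations and figures are illustrative only.

```python
# Minimal sketch (hypothetical rates): a "best-in-class" state benchmark
# and the Resource Guide's "top tier of States" alternative.

state_rates = {"OR": 91.2, "MN": 89.5, "VT": 88.7, "TX": 74.1, "MS": 70.3}

# Best-in-class: the single highest performing state
best_state, best_rate = max(state_rates.items(), key=lambda kv: kv[1])

# Top-tier variant: mean rate of, say, the top three states
top_tier = sorted(state_rates.values(), reverse=True)[:3]
top_tier_rate = sum(top_tier) / len(top_tier)

print(f"Best-in-class: {best_state} at {best_rate}%")
print(f"Top-tier (top 3) benchmark: {top_tier_rate:.1f}%")
```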

Figure 6-1 shows that it is possible to display a best-in-class benchmark (a state in this instance) along with the national performance average and the Healthy People target. The committee does not intend that the style, format, and layout of this figure be adopted by AHRQ; rather, the committee presents this figure to show the relationship of a benchmark relative to the type of performance data that are in the domain of the NHQR or NHDR. From the perspective of the NHQR, which tends to provide state-based data as well as national average performance, the highest performing state, Oregon, provides a benchmark that could be applied across both reports.

The committee recognizes that some measures and their corresponding data sources may be amenable to choosing a different benchmarking "class" than a state. A measure that uses only HEDIS data may, for instance, lend itself to analyzing data by health plans. Thus, AHRQ could present a best performing plan as the benchmark. Similarly, AHRQ might decide that the hospitals comprising the HCUP datasets constitute a comparable set of observations and could present a best performing hospital as the benchmark.

Denoting a best-in-class benchmark is as important for measures in the NHDR as it is for measures in the NHQR, and the committee concludes that for each measure, the benchmark used in the NHDR should mirror the benchmark used in the NHQR. The goal of quality improvement efforts should not be to strive just for the Hispanic population to receive care at the rate of the non-Hispanic population; rather, quality improvement efforts should aim to improve the quality of care for all populations. In the case of the NHDR, different disparity populations would be compared against the quality benchmark in addition to being compared against the best performing population. For example, AHRQ may establish a state-based benchmark for a specific measure of lipid control and use this same benchmark in both the NHQR and NHDR. The committee recognizes that reporting in the NHDR which state has the best rate on lipid control for specific populations would be useful (e.g., reporting that X state has the highest performance level for Hispanics and Y state has the highest performance level for African Americans), but such data are not always available. Adopting a separate benchmark based on the best performing population group within a "class" can prove difficult because there are multiple population groups studied in the NHDR, and detailed data are not always available or sample sizes may be too small to stratify population data by hospital, health plan, or even state. Ideally, data would be available for sociodemographic descriptors within whichever class a benchmark is being set; when they are not, an alternative approach to presenting a benchmark in the NHDR is needed. The committee advises AHRQ that the benchmark can be the best performing state or can come from the class of units compared in the measure's data source.
When the data are available, the committee encourages AHRQ to present multiple population-specific benchmarks (i.e., a benchmark that is uniform with the NHQR as well as other benchmarks that are population specific). When multiple achievement levels are available, alternatives to presenting the data graphically may be needed (e.g., listing in text boxes).

The committee encourages the analysis of performance data by accountable units (e.g., states, health plans, hospitals). When it is feasible for AHRQ to analyze data for a measure by multiple accountable units, there is the possibility for multiple benchmarks of attained performance for one specific measure. Presenting multiple benchmarks might add clutter to graphs, so AHRQ may choose to present the multiple achievement levels in a sidebar text box.

The Future Directions committee believes benchmarks provide a means to supply concrete milestones for comparison and evaluation. For comparative purposes, having a uniform benchmarking unit such as a state may be useful, although other classes (e.g., plans, hospitals) may be informative for entities implementing programs to improve quality and eliminate disparities. Thus, the committee recommends:

Recommendation 7: To the extent that the data are available, the reporting of each measure in the NHQR and NHDR measure set should include routinely updated benchmarks that represent the best known level of performance that has been attained.

Data Limitations in Benchmarking

As discussed above, AHRQ could present data on a high-performing entity for which data are available (e.g., the best performing health plans based on data from the National Committee for Quality Assurance). This approach, however, may require particular attention to issues of statistical reliability. The population distribution from which a benchmark is derived must be considered carefully so that entities are not evaluated against a population that is not well-matched to their particular case-mix, geography, or other relevant factors (Linderman et al., 2006). When the population of analysis includes high-performing entities that have a small number of cases, the analysis must be corrected to account for the small-numbers problem (Normand et al., 2007). There are techniques—including the Achievable Benchmarks of Care method, which uses a Bayesian estimator to reduce the impact of entities with a small number of eligible patients—that AHRQ could use to adjust for the small denominator problem (i.e., if a plan had only one qualifying patient, then the performance of that plan could be either 0 percent or 100 percent) (Weissman et al., 1999).
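As an illustration of the small-denominator adjustment, the sketch below applies the general logic of the Achievable Benchmarks of Care method with hypothetical provider counts. The adjusted rate (x + 1)/(n + 2) and the rule of pooling top performers until they cover at least 10 percent of eligible patients follow the published ABC approach, but the exact parameters any given analysis uses may differ.

```python
# Sketch of the Achievable Benchmarks of Care (ABC) approach described
# above; the provider counts are hypothetical. Each provider's rate is
# shrunk with a Bayesian adjustment, (x + 1) / (n + 2), so that tiny
# denominators (e.g., 1/1 = 100%) cannot dominate the benchmark.

providers = [  # (numerator x, denominator n) per plan/hospital/state
    (1, 1), (45, 50), (180, 200), (70, 100), (9, 10), (300, 500),
]

total_n = sum(n for _, n in providers)

# Rank providers by adjusted rate, highest first
ranked = sorted(providers, key=lambda p: (p[0] + 1) / (p[1] + 2), reverse=True)

# Select top performers until they cover at least 10% of eligible patients
selected, covered = [], 0
for x, n in ranked:
    selected.append((x, n))
    covered += n
    if covered >= 0.10 * total_n:
        break

# Benchmark = pooled performance of the selected top performers
abc_benchmark = sum(x for x, _ in selected) / sum(n for _, n in selected)
print(f"ABC benchmark: {abc_benchmark:.1%}")
```

Note that the unadjusted top performer here (1 of 1 patients, 100 percent) never sets the benchmark: its adjusted rate of 2/3 drops it well down the ranking, which is exactly the behavior the adjustment is meant to produce.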

As an additional consideration, data on state performance may be unavailable for all measures. Although the State Snapshot Web site does not include state data for 26 of AHRQ's 46 core measures, the committee finds it feasible for AHRQ to obtain state data for some of these (e.g., access measures, measures from Centers for Disease Control and Prevention data).10 Furthermore, for measures in which data on the best performing state are available, not all states may have reported on the measure or been included in analysis (e.g., the Healthcare Cost and Utilization Project). Therefore, the best performing state may actually be the "reported best performing state." AHRQ may consider recognizing this in either introductory text or in a footnote.

For many measures of health care quality, even the highest performing state, population, or provider does not deliver the level of care recommended in guidelines. Benchmarking within a field of low performers may result in further underperformance because low performance is seen as normal (Reinertsen and Schellekens, 2005). AHRQ should take this into consideration when determining the class from which to derive a benchmark and should ensure the benchmark represents a desirable level of performance.

7 In 2004, AHRQ was advised to use a chartbook format for future iterations of the NHQR (Gold and Nyman, 2004).
8 Healthcare Research and Quality Act of 1999, Public Law 106-129 §902(g) and §913(b)(2), 106th Cong., 1st sess. (November 19, 1999).
9 The Healthy People 2010 targets are, in almost all cases, higher than the current national performance or even the rate of the best performing state. For some measures presented in the NHQR and NHDR, however, performance is at or above the Healthy People target. For example, the composite measure for children ages 19-35 months who received all recommended vaccines has a Healthy People target of 80 percent attainment; the national average for this measure was 80.6 percent, achieving this target.
10 Because the State Snapshots were initially developed to supplement measures in only the NHQR, access measures are not included in the State Snapshots; in accordance with the committee's recommendation to integrate access in the quality portfolio of measures, it is important for AHRQ to include access measures in the State Snapshots.


Page last reviewed October 2014
Page originally created September 2012
Internet Citation: Chapter 6: Improving Presentation of Information (pt. 2). Content last reviewed October 2014. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/research/findings/final-reports/iomqrdrreport/futureqrdr6a.html