Module 3: Information—Interpreting State Estimates of Diabetes Quality

Diabetes Care Quality Improvement: A Resource Guide for State Action

Module Overview:

  1. Deriving Information From Data
  2. Step 1: Identifying Appropriate Metrics and Comparisons
    1. Benchmark Metrics for States
    2. Understanding State Variation
    3. Four States Compared to Benchmarks
  3. Step 2: Interpreting the Data: What Does It Mean?
    1. Factors That Affect the Quality of Diabetes Care
    2. Interpreting Process and Outcome Measures Together
  4. Summary and Synthesis
  5. List of Associated Appendixes for Use With This Module

Key Ideas in Module 3:

  • The reason to assemble data on diabetes care is to produce the information needed for understanding and planning.
  • Analysis of the NHQR data tables can answer some key questions for States:
    • What measures should be used to set goals for quality diabetes care?
      • Consensus-based measures with national endorsements
    • What goals should be set as targets for specific measures?
      • Best-in-class estimates of achievable and practical levels
    • What factors influence a State's position among other States?
      • Health system factors, consumer behaviors, and immutable population attributes
  • Process and outcome measures should be considered together to assess a State's diabetes care quality.
  • State-level baseline estimates of diabetes care allow States to assess their starting point and to evaluate their progress over time.
  • State-level baseline estimates across all conditions studied in the NHQR afford State leaders a broad view of health care quality in their State.

Deriving Information From Data

Data do not necessarily convey information. Information comes from data that have been collected, analyzed, and arranged to answer a question. Deriving information from data usually requires original data collection designed to answer the question. However, "secondary" use of data collected for another purpose can often yield powerful information, obtained efficiently.

Both original and secondary data collection require strategies for summarizing and interpreting the results. For example, determining how well the health care system has educated and motivated people with diagnosed diabetes to control their blood glucose levels requires original collection of HbA1c laboratory values from clinical records. The resulting HbA1c values must be summarized (e.g., using overall averages), explored by relevant subgroups (e.g., managed care versus private practice, to determine how well providers in different settings educate and motivate their patients), and interpreted in terms of how well the assembled database answers the question and represents the total population (e.g., data collected from clinical records miss people with undiagnosed diabetes and people without access to health care).

Secondary data assembled from various sources for the NHQR address the overarching question of how well the U.S. health care system provides health care for U.S. residents. Although State-specific estimates are provided in the NHQR for many measures, they are not fully analyzed there from a State perspective.

Steering committees for State quality improvement programs need information to answer many questions about the State's health care quality performance. Among them are:

  • What measures should the State use to assess health care quality?
  • What metrics and comparisons should be used to assess the State on each measure?
  • What does the State's position among other States mean?
  • What goals should be set for quality improvement?

While not all the questions that a quality improvement committee might raise will be answerable from data in the NHQR, the report is a valuable source for identifying readily available, consensus-based measures, for locating national averages, for deriving other benchmarks, and for selecting achievable targets for improvement. This module shows how to do these things from a State viewpoint. Module 2 presented a minimum set of measures from the NHQR that can be used for assessing diabetes quality within the State. Module 3 uses that measure set to describe two steps:

Step 1: Identifying appropriate metrics and comparisons
Step 2: Interpreting the State's position among other States

While the specific questions that State leaders ask about the quality of health care in the State will determine the comparisons to be made, below is a general guide to thinking about and using the data in the NHQR to create information for State quality improvement programs.


Step 1: Identifying Appropriate Metrics and Comparisons

Benchmark Metrics for States

The NHQR provides a national set of estimates and some State estimates that can be used as benchmarks for quality improvement. A benchmark is an external reference point for assessing how one entity, for example a State, compares with others. The benchmark can represent the best performer or the average performer; how the State fares depends on which benchmark is used.

Several types of metrics or benchmarks can be used for assessing a State. From most to least stringent, they include:

  • The theoretic limit of aiming for 100-percent achievement (or 0-percent occurrence for avoidable events), which is an ideal but often impractical goal.
  • A best-in-class estimate of the top State or top tier of States (the top 10 percent of States is used in this Resource Guide), which shows what has been achieved.
  • A national consensus-based goal, such as Healthy People 2010, set by a panel of experts; such goals may be more or less stringent than other benchmarks.
  • A national average over all States, which shows the norm of practice nationwide but, being an average estimate, will represent a weaker goal than the best-in-class estimate.
  • A regional average, which a State can use to compare itself to other States that are more likely to face similar environments, but as a goal it will be less aggressive than the best-in-class goal.
  • An individual State rate, which itself can be used as a baseline against which to evaluate State-level interventions and progress over time within the State or to offer as a norm for local provider comparisons.

Most of these benchmarks can be found in or derived from the NHQR. The best-in-class estimate is not reported in the NHQR, nor is the regional norm based on BRFSS data. Both, however, can be derived from data in the NHQR. Detail on how the best-in-class estimate and other benchmarks are derived is given in Appendix D.
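
For readers who want to reproduce these benchmarks for their own data, the sketch below illustrates the basic arithmetic in Python. The State names and rates are hypothetical, and the calculation is simplified; the actual Appendix D derivation works from the published BRFSS estimates and may handle weighting and ties differently.

```python
# Minimal sketch (hypothetical data): deriving benchmark values from a table
# of State estimates. The State names and rates below are illustrative, not
# actual BRFSS figures, and the real Appendix D derivation may differ in
# details such as weighting and the handling of ties.
import math

# Percent of adults with diabetes reporting 2+ HbA1c tests in the past year.
state_rates = {
    "State A": 74.0, "State B": 61.5, "State C": 68.2, "State D": 55.9,
    "State E": 82.3, "State F": 70.1, "State G": 66.4, "State H": 77.8,
}

rates = sorted(state_rates.values(), reverse=True)

# National (all-State) average: the middle-of-the-road benchmark.
national_average = sum(rates) / len(rates)

# Best in class: average of the top 10 percent of States (at least one State).
top_n = max(1, math.ceil(0.10 * len(rates)))
best_in_class = sum(rates[:top_n]) / top_n

# Theoretic limit for a desirable process measure.
theoretic_limit = 100.0

print(f"National average: {national_average:.1f}%")
print(f"Best in class (top {top_n} State(s)): {best_in_class:.1f}%")
print(f"Theoretic limit: {theoretic_limit:.0f}%")
```

In practice, the same calculation would be repeated for each diabetes measure and each year of data, and a regional average would be computed the same way over the subset of States in the region.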

Benchmarks for each of the diabetes care-related measures in the NHQR are reported in Table D.1 in Appendix D. Benchmarks for four of these measures (HbA1c test, eye exam, foot exam, and influenza vaccination) are graphically displayed in Figure 3.1.

For HbA1c testing, for example, Figure 3.1 shows a range of benchmark values. Though the theoretic limit may be difficult to achieve for many valid reasons, the best-in-class estimate has been achieved by some States. The national average is often used to assess a State's performance. However, Figure 3.1 makes it clear that the national average is not a very difficult level to achieve; about half of the States are above and about half are below that average. The same is true for regional estimates that take into account the practice patterns in different regions of the country.

For the eye and foot exam process measures in Figure 3.1, the best-in-class average is above the national Healthy People 2010 goal, which itself still exceeded the national average in 2001. Influenza vaccination for adults with diabetes has the lowest rates of these process measures, partly because adults over age 64 are excluded from this measure (while they are included in the other three measures). Moreover, Healthy People 2010 did not set a goal for influenza immunization of this population.

Understanding State Variation

Although comparing the State's rate to a benchmark shows how far or close the State's rate is from the benchmark, it gives few clues as to the State's position among all other States. If a State's rate is below the national average, is it the lowest of the low States? Or, is it doing better than all the other States that are below the national norm? Knowing this ranking can help a State understand how much effort might be needed to catch up with health care quality in other States.

Average benchmark values do not reveal the degree of variation that exists on any one measure across the Nation. Variation among States can be seen on a scatter diagram, where each State is one point on the graph. Other indicators can be added to the scatter diagram and identified with different symbols. Figure 3.2 shows (as gray diamonds) the distribution of State rates for important tests that should be performed each year for people with diabetes. It superimposes the national average as a black square and the best-in-class average as a black triangle.
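
A display like Figure 3.2 can be assembled from any set of State estimates. The sketch below is a minimal illustration using matplotlib; the measure names match the four process measures discussed here, but the rates are hypothetical rather than actual BRFSS estimates, and the single best State stands in for the top-decile average.

```python
# Minimal sketch (hypothetical data) of a Figure 3.2-style display: State
# rates as gray diamonds, with the national average (black square) and a
# best-in-class value (black triangle) overlaid. Rates are illustrative,
# not actual BRFSS estimates.
import matplotlib.pyplot as plt

measures = {
    "2+ HbA1c tests":   [52, 58, 63, 67, 71, 74, 79, 83],
    "Annual eye exam":  [55, 60, 64, 68, 70, 73, 77, 81],
    "Annual foot exam": [50, 57, 61, 66, 69, 72, 76, 80],
    "Flu vaccination":  [17, 28, 34, 39, 44, 49, 55, 64],
}

fig, ax = plt.subplots()
for i, rates in enumerate(measures.values()):
    national_avg = sum(rates) / len(rates)
    best_in_class = max(rates)  # stand-in for the top-decile average
    ax.scatter([i] * len(rates), rates, marker="D", color="lightgray",
               label="States" if i == 0 else None)
    ax.scatter([i], [national_avg], marker="s", color="black",
               label="National average" if i == 0 else None)
    ax.scatter([i], [best_in_class], marker="^", color="black",
               label="Best in class" if i == 0 else None)

ax.set_xticks(range(len(measures)))
ax.set_xticklabels(list(measures), rotation=20)
ax.set_ylabel("Percent of adults with diabetes")
ax.legend()
plt.tight_layout()
plt.show()
```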

Figure 3.2 reveals that influenza immunization for people with diabetes shows the most State-to-State variation among the four measures and has the lowest rates: providers in one State vaccinated only 17 percent of adults aged 18 to 64 with diabetes. The spread among the States is nearly fourfold, from 17 to 64 percent. The other tests are also performed with wide variation, ranging from about 50 percent to 80 or 90 percent of adults with diabetes across the States. Such variation indicates considerable room for improvement for many of the States.

Figure 3.3 modifies Figure 3.2 to track an individual State (State A) across all the measures of diabetes care quality. State A is represented as a black diamond when its rate is statistically different from the national average and as a black-bordered white diamond when it is not.

A solid black diamond in Figure 3.3 indicates that the difference between State A's rate and the national average is statistically significant and probably is not attributable simply to the random variation that occurs among the States and within each State. It may well represent some practice in State A that is not common nationally. What causes that difference cannot be deciphered from these data; local insights and exploration are needed to understand the underlying factors that might influence State A's rate of HbA1c testing.

A black-bordered white diamond in Figure 3.3 indicates that the difference between State A's rate and the national average is not statistically significant and could as easily result from random variation as from any specific practice by health care providers in State A. (Statistical significance and how it is determined are explained in Appendix E.)
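
Appendix E describes the significance testing used in this Resource Guide. As a generic illustration of the idea only, the sketch below applies a simple two-proportion z-test to hypothetical numbers; the actual method must account for the BRFSS complex survey design (weights and design effects), so treat this as a rough approximation of the logic rather than the Guide's procedure.

```python
# Minimal sketch (hypothetical numbers): a generic two-proportion z-test asking
# whether a State rate differs from the national average by more than sampling
# variation would explain. The actual NHQR/Appendix E method accounts for the
# BRFSS complex survey design, so this is only an approximation of the logic.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(p1, n1, p2, n2):
    """Two-sided z-test comparing two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative values: 74% of 400 sampled adults with diabetes in State A had
# 2+ HbA1c tests, versus 68% among 20,000 respondents nationally.
z, p = two_proportion_z_test(0.74, 400, 0.68, 20_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 would be flagged as significant
```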

In general (although not shown for all States here), when individual States are tracked across diabetes measures, it becomes apparent that performance is uneven across measures. No single State consistently ranks highest or lowest across all measures. Thus, all States have some room for improvement compared to national benchmarks and clinical guidelines for the treatment of people with diabetes.

Four States Compared to Benchmarks

To show how States may want to examine their own estimates, four States are compared below to national, regional, and best-in-class State-average benchmarks. The national average and best-in-class average, when used together, summarize key information across the States and enable graphical comparisons to be simplified as bar charts (go to Figures 3.4.A, 3.4.B, 3.4.C, and 3.4.D).

In the bar charts, a statistically significant difference from the national average is indicated by a bold value at the top of the State bar. State values that are not bold are not statistically distinguishable from the national average. (For a discussion of how to consider statistical significance, go to the previous section on Understanding State Variation.) Statistical tests have also been performed to compare each State average to the best-in-class average; these test results are presented in Table 2.1 in Module 2: Data.

The example States were chosen because each comes from a different region of the country and, together, they represent a range of experiences: States performing above or below the national average on individual measures, and States with longstanding as well as relatively new quality improvement programs in diabetes prevention and control. Using four States makes it easier to show the nuances of making comparisons with limited data. The graphical and statistical analysis below can be applied to any State collecting these measures through the BRFSS.

Each of these States is described in terms of the four diabetes care process measures explored generally above. Descriptions of their quality improvement activities can be found in Module 4: Action.

Georgia: Figure 3.4.A reveals two facts about diabetes care in Georgia compared to national norms:
  • People with diabetes in Georgia are more likely than the national norm to report having had two or more HbA1c tests in the past year. This is a statistically significant finding and suggests that Georgia health care professionals are aware of the importance of glycemic control, are testing their patients, and may be educating their patients about glycemic control. Whether they are successful in helping their patients control their blood glucose cannot be determined from these data. It is also possible that Georgia physicians see a more advanced stage of diabetes among their patients and are therefore testing them more frequently. Special data collection would be necessary to evaluate the blood glucose levels of people with diabetes in the State and the effectiveness of the better-than-average HbA1c testing in Georgia.
  • Georgia does not differ statistically from the national average on the three process measures that relate to eye exams, foot exams, and influenza vaccination for adults with diabetes. The absence of a statistically significant difference has to be tempered by the fact that BRFSS samples for individual States are often quite small and probably too small to have the power to detect a difference of the size measured here, even if it exists. Thus, the higher rates of eye exams, which are not statistically significant, are simply inconclusive. When compared to the more stringent benchmark of the best-in-class rates for these measures, however, Georgia is not one of the top 10 percent of States (go to Table 2.1). This is especially true for immunization against influenza, where there is almost a 30-percentage-point difference between Georgia's rate and the best-in-class average (30 percent versus 59 percent).
Massachusetts: Figure 3.4.B reveals the following about Massachusetts compared to national norms:
  • Massachusetts appears to be close to the national average on all of the NHQR process measures for diabetes care quality, using the test of statistical significance. However, because of the small sample sizes of the BRFSS, it is worth considering the magnitude of the differences from the national average. Massachusetts' rate of HbA1c testing two or more times per year is 8 percentage points higher than the national average, and its rate of flu immunization is 7 points higher; these are notable differences. In both cases, however, the amount of variation among the States and within Massachusetts makes these statements equivocal, and the higher values could as likely be due to chance as to better performance.
  • Massachusetts is not among the top decile of States when compared to best-in-class estimates. Massachusetts' lower values are statistically different from the best-in-class averages (go to Table 2.1), which means the differences relative to the top 10 percent of States are unlikely to be chance occurrences. This suggests that Massachusetts may want to focus system-wide efforts on improving diabetes care quality.
Michigan: Figure 3.4.C shows that:
  • Michigan is similar to the national average across all States for three of the four process measures. For HbA1c testing two or more times per year, annual eye exams, and annual foot exams, Michigan's rates are not statistically different from the national average, and the differences are within 5 percentage points.
  • Michigan is below the best-in-class average on all four measures, and the differences are statistically significant (Table 2.1). For two measures in particular, the differences are large: rates for HbA1c testing two or more times per year in Michigan are 27 percentage points lower than the best-in-class average, and rates for influenza immunization are 32 percentage points lower. This result calls for local study and possibly identifies an opportunity for Michigan to focus activities more widely on HbA1c testing and influenza vaccination for people with diabetes.
Washington State: Figure 3.4.D shows the following for Washington:
  • Washington State performs better than the national average on influenza immunization for people with diabetes. The Washington rate (49.6 percent) is 12 percentage points higher than the national average, and the difference is statistically significant. Furthermore, although Washington is not one of the top decile States (whose values average 59 percent and range from 56 to 64 percent; Table 2.1), its rate is not statistically different from the top decile, given the amount of variation among and within the States. Thus, Washington is doing relatively well in vaccinating its diabetes population. However, given that rates of immunization are low in all States, benefits are possible from activities aimed at improving immunization of people with diabetes against influenza.
  • Washington is similar to the national average on the other three process measures. Rates for the State are similar to the national average for HbA1c tests two or more times per year, annual eye exams, and foot exams. Washington's rates are higher than, but within 5 percentage points of, the national average. Washington, however, is not among the top decile of States when compared with best-in-class averages (go to Table 2.1).

Keep in mind that data on diabetes process measures provide only a partial picture of diabetes care in each State. Outcome measures would be a valuable addition for understanding the impact of care processes in each State. The NHQR provides one of its diabetes outcome measures, avoidable admissions for uncontrolled, uncomplicated diabetes, for 14 States, including the four example States reviewed above. These data are discussed in Step 2.

