Healthcare Cost and Utilization Project (HCUP)

Appendix B, National Healthcare Quality Report, 2008

By Rosanna Coffey, Ph.D., Marguerite Barrett, M.S., Bob Houchens, Ph.D., Jeff Brady, M.D., M.P.H., Ernest Moy, M.D., M.P.H., Karen Ho, M.H.S., Roxanne Andrews, Ph.D.

October 22, 2008

This section discusses methods for applying the Agency for Healthcare Research and Quality Quality Indicators to the Healthcare Cost and Utilization Project hospital discharge data for several measures in the 2008 National Healthcare Quality Report.

The Agency for Healthcare Research and Quality (AHRQ) Quality Indicators (QIs) were applied to the HCUP hospital discharge data for several measures in the National Healthcare Quality Report (NHQR). The AHRQ QIs are measures of quality associated with processes of care that occurred in an outpatient or an inpatient setting. The QIs rely solely on hospital inpatient administrative data and, for this reason, are screens for examining quality that may indicate the need for more in-depth studies. The AHRQ QIs used for the NHQR include four sets of measures:

  • Prevention Quality Indicators (PQIs)—or ambulatory-care-sensitive conditions—identify hospital admissions that evidence suggests could have been avoided, at least in part, through high-quality outpatient care (AHRQ, 2007).
  • Inpatient Quality Indicators (IQIs) reflect quality of care inside hospitals and include measures of utilization of procedures for which there are questions of overuse, underuse, or misuse (AHRQ, 2007).
  • Patient Safety Indicators (PSIs) reflect quality of care inside hospitals, by focusing on surgical complications and other iatrogenic events (AHRQ, 2007).
  • Pediatric Quality Indicators (PDIs) reflect quality of care inside hospitals and identify potentially avoidable hospitalizations among children (AHRQ, 2006).

The QI measures selected for the NHQR are described in Table 1.

The Healthcare Cost and Utilization Project (HCUP) is a family of health care databases and related software tools and products developed through a Federal-State-industry partnership and sponsored by AHRQ. HCUP databases bring together the data collection efforts of State data organizations, hospital associations, private data organizations, and the Federal Government to create a national information resource of discharge-level health care data. HCUP includes the largest collection of longitudinal hospital care data in the United States, with all-payer, encounter-level information beginning in 1988. These databases enable research on a broad range of health policy issues, including cost and quality of health services, medical practice patterns, access to health care programs, and outcomes of treatments at the national, State, and local market levels.

Two HCUP discharge datasets were used for the NHQR:

  • The HCUP Nationwide Inpatient Sample (NIS), a nationally stratified sample of hospitals (with all of their discharges) from States that contribute data to the NIS dataset (37 States in the 2005 NIS).
  • The HCUP State Inpatient Databases (SID), a census of hospitals (with all of their discharges) from 37 participating States in 2005.

For 2005, the NIS contains roughly 8.0 million discharges from more than 1,000 hospitals and the SID contains about 32.4 million discharges (approximately 83 percent of the 39.2 million discharges in the United States). Data from 1994, 1997, and 2000-2005 were used in this report. Limited reporting was done at the State-specific level. For the list of data organizations that contribute to the HCUP databases, see Table 2.

To apply the AHRQ Quality Indicators to HCUP hospital discharge data for the NHQR, several steps were taken: (1) QI software review and modification, (2) acquisition of population-based data, (3) general preparation of HCUP data, and (4) identification of statistical methods. These steps, described briefly below, are presented in greater detail in the Technical Specifications for HCUP Measures in the 2008 National Healthcare Quality Report and the National Healthcare Disparities Report (Barrett, Houchens, Coffey, et al., 2008), available from AHRQ on request.

  1. QI Software Review and Modification. For this report, we started with the following QI software versions: PQI Version 3.1, IQI Version 3.1, PSI Version 3.1, and PDI Version 3.1. Because each of these software modules was developed for State- and hospital-level rates, rather than national rates, some changes to the QI calculations were necessary. (For details, see Barrett, Houchens, Coffey, et al., 2008.) We also added two indicators particularly relevant to the structure of the NHQR for patients age 65 years and over: immunization-preventable influenza and adult asthma admissions.
  2. Acquisition of Population-Based Data. The next step was to acquire data for the numerator and denominator populations for the QIs. A QI is a measure of an event that occurs in a hospital, requiring a numerator count of the event of interest and a denominator count of the population (within the hospital or within the geographic area) to which the event relates.

    For the numerator counts of the AHRQ QIs, we used the HCUP NIS to create national estimates and used the SID for State-level estimates. For the denominator counts, we identified two sources for all reporting categories and for all adjustment categories listed in the HCUP-based tables. The HCUP data were used for State- and national-level discharge denominator counts for QIs that related to providers. Population ZIP Code-level counts from Claritas (a vendor that compiles and adds value to the U.S. Bureau of Census data) were used for denominator counts for QIs that related to geographic areas. Claritas uses intracensus methods to estimate household and demographic statistics for geographic areas (Claritas, Inc., 2005). We also used the Claritas population data for risk adjustment by age and gender for the area-based QIs.
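As a concrete illustration, an area-based QI rate is the numerator event count divided by the area population, conventionally expressed per 100,000 residents. A minimal sketch with hypothetical numbers (not figures from the report):

```python
def area_qi_rate(numerator_events, area_population, per=100_000):
    """Area-level QI rate: qualifying hospitalizations per `per` residents
    of the geographic area (the Claritas-based denominator)."""
    return numerator_events / area_population * per

# Hypothetical: 1,250 potentially avoidable admissions, 900,000 residents.
rate = area_qi_rate(1_250, 900_000)
```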
  3. Preparation of HCUP Data. Next, the HCUP SID was modified to create analytic files consistent with the NIS and consistent across States.
    • Subset to Community Hospitals. For the SID, we selected community hospitals1 and eliminated rehabilitation hospitals.
    • Weight for Missing Hospitals. Because some statewide data organizations do not report data for all community hospitals in the State, we weighted hospitals in the SID to the State's universe of hospitals in the American Hospital Association Annual Survey Database based on hospital characteristics.
    • Weight for Missing Quarters. Discharges from hospitals operating for the entire year but not contributing data for one or more quarters were weighted up to annual estimates for that institution in the SID.
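The two weighting steps above can be sketched as simple scaling factors; the stratum counts and quarter counts below are hypothetical, not the actual AHA-based strata:

```python
def hospital_weight(universe_hospitals, reporting_hospitals):
    """Post-stratification weight: scale the hospitals reporting in a
    stratum (e.g., similar bed size, ownership, region) up to the count
    in the AHA universe for that stratum."""
    return universe_hospitals / reporting_hospitals

def quarter_weight(quarters_reported):
    """Scale discharges from a hospital reporting fewer than four
    quarters up to a full-year estimate for that institution."""
    return 4 / quarters_reported

# Hypothetical: 25 of a stratum's 40 universe hospitals report to the
# SID, and one hospital reports only 3 of 4 quarters.
w_hospital = hospital_weight(40, 25)
w_quarter = quarter_weight(3)
```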

In addition, the following issues had to be resolved in the NIS and SID before applying the QI algorithms:

  • Impute for Missing Characteristics. For missing age, gender, ZIP Code, and payer data that occurred on a small proportion of discharge records, we used a "hot deck" imputation method (which draws donors from strata of similar hospitals and patients) to assign values while preserving the variance within the data.
  • Assign Additional Measures for Reporting. We assigned median household income using the Claritas ZIP Code data linked to patient's ZIP Code in the SID. For the 2008 NHQR, we added reporting by the National Center for Health Statistics (NCHS) county-level classification of urban-rural location, which includes gradations of metropolitan, micropolitan, and non-core counties by population size.
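The "hot deck" imputation in the first bullet above can be sketched as follows; the stratum key and records are simplified stand-ins for the actual hospital and patient strata:

```python
import random
from collections import defaultdict

def hot_deck_impute(records, field, stratum_key, seed=42):
    """Fill missing `field` values by drawing a random donor value from
    the same stratum; random draws (unlike mean imputation) preserve the
    variance within the data."""
    rng = random.Random(seed)
    donors = defaultdict(list)
    for rec in records:
        if rec.get(field) is not None:
            donors[stratum_key(rec)].append(rec[field])
    for rec in records:
        if rec.get(field) is None and donors[stratum_key(rec)]:
            rec[field] = rng.choice(donors[stratum_key(rec)])
    return records

# Hypothetical records: impute a missing gender within hospital strata.
recs = [
    {"hospital": "A", "gender": "F"},
    {"hospital": "A", "gender": "M"},
    {"hospital": "A", "gender": None},
]
hot_deck_impute(recs, "gender", stratum_key=lambda r: r["hospital"])
```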

The four AHRQ QI program modules were applied to the prepared SID data using all available diagnoses and procedures reported by each State. The QI indicators from the SID were then linked to the corresponding discharge records on the NIS. During this linkage, any additional information for reporting described above was also added to the NIS.

  4. Statistical Methods. Identification of statistical issues included the following: age-gender adjustment for all QIs; severity/comorbidity adjustment for the discharge-based IQIs, PSIs, and PDIs; and derivation of standard errors and appropriate hypothesis tests.
    • Age-Gender Adjustment. For the PQIs and the area-based IQIs, PSIs, and PDIs, rates were adjusted for age and gender differences across population subgroups using direct standardization (Fleiss, 1973). Age was categorized into 18 five-year increments (described in Table 3, Age Groupings for Risk Adjustment). Although the AHRQ QI software uses a similar approach to adjust the area-based QIs, we relied on direct standardization because of the additional reporting categories and population denominators required in the NHQR.
    • Age, Gender, Severity, and Comorbidity Adjustment. For the discharge-based PSIs, adjustments were made for age, gender, age-gender interaction, DRG cluster, and comorbidity using the regression-based standardization that is part of the AHRQ PSI software. For the discharge-based IQIs, adjustments were made for age, gender, age-gender interaction, and 3M™ All Patient Refined Diagnosis Related Groups (APR-DRGs) risk of mortality or severity score using the regression-based standardization that is part of the AHRQ IQI software.

For the discharge-based PDIs, adjustments were made for age, gender, DRG and MDC clusters, and comorbidity using the regression-based standardization that is part of the AHRQ PDI software. Measure-specific stratification by risk group, clinical category, and procedure type was also applied.

  • Standard Errors and Hypothesis Tests. Standard error calculations for the rates were based on the HCUP report entitled Calculating Nationwide Inpatient Sample (NIS) Variances (Houchens, et al., 2005). There is no sampling error associated with Claritas census population counts; therefore, appropriate statistics were obtained through the Statistical Analysis System (SAS) procedure called PROC SURVEYMEANS. QI estimates were included in the NHQR if they reached a threshold defined by a relative standard error less than 30% and at least 10 unweighted cases in the denominator. Estimates that did not satisfy these criteria were set to missing. Statistical calculations are explained in Appendix A to this report and in Barrett, Houchens, Coffey, et al. (2008).
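The direct standardization and the reporting threshold described above can be sketched as follows; the three broad age strata and all counts are hypothetical (the report itself uses 18 five-year age groups):

```python
def direct_standardized_rate(stratum_rates, standard_pop):
    """Directly standardized rate: a weighted average of stratum-specific
    rates, weighted by each stratum's share of the standard population
    (Fleiss, 1973)."""
    total = sum(standard_pop.values())
    return sum(rate * standard_pop[s] / total
               for s, rate in stratum_rates.items())

def reportable(standard_error, estimate, unweighted_denominator):
    """NHQR reporting threshold: relative standard error under 30 percent
    and at least 10 unweighted cases in the denominator."""
    return (standard_error / estimate) < 0.30 and unweighted_denominator >= 10

# Hypothetical: three broad age strata instead of the report's 18 groups.
rates = {"0-17": 50.0, "18-64": 120.0, "65+": 400.0}        # per 100,000
std_pop = {"0-17": 70_000, "18-64": 180_000, "65+": 50_000}
adjusted = direct_standardized_rate(rates, std_pop)
```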

1 Community hospitals are defined by the AHA as "non-Federal, short-term, general, and other specialty hospitals, excluding hospital units of institutions." Specialty hospitals included among community hospitals are obstetrics-gynecology, ear-nose-throat, short-term rehabilitation, orthopedic, and pediatric institutions. Also included are public hospitals and academic medical centers. Excluded are short-term rehabilitation hospitals (beginning with 1998 HCUP data), long-term hospitals, psychiatric hospitals, and alcoholism/chemical dependency treatment facilities.


Calculating Costs Associated With Quality Indicators

The HCUP databases include information on total hospital charges. Using HCUP hospital-level cost-to-charge ratios based on hospital accounting reports from the Centers for Medicare and Medicaid Services,2 total charges are converted to costs. Costs tend to reflect the actual costs of production, while charges represent what the hospital billed for the stay. Hospital charges reflect the amount the hospital charged for the entire hospital stay and do not include professional (physician) fees.
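The conversion is a simple scaling; the charge amount and cost-to-charge ratio below are hypothetical, not values from the HCUP CCR files:

```python
def charges_to_cost(total_charges, cost_to_charge_ratio):
    """Estimated cost of a stay: billed charges scaled by the hospital's
    CMS-derived cost-to-charge ratio."""
    return total_charges * cost_to_charge_ratio

# Hypothetical: a $20,000 bill at a hospital with a 0.45 ratio.
estimated_cost = charges_to_cost(20_000, 0.45)
```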

Total national costs associated with potentially avoidable hospitalizations are calculated for three PQI composites—overall, acute, and chronic conditions. The total cost is the product of the number of stays for each PQI composite and the mean cost for each PQI composite. This approach compensates for stays for which charges (and thus estimated costs) are not available.

Total cost savings from reducing avoidable hospitalizations are estimated based on the risk-adjusted rates for the top 10 percent of States. The adjusted rates for these best performers are averaged. The potential reduction in cases is the actual number of U.S. cases minus the expected number of U.S. cases based on the best-performer average. The total cost savings is the product of the average national cost per case and the potential reduction in cases. An example using PQI 14, Uncontrolled Diabetes, is provided in Appendix B.
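The total-cost and cost-savings calculations can be sketched as follows; all inputs are illustrative, not figures from the report:

```python
def total_national_cost(n_stays, mean_cost_per_stay):
    """Total cost for a PQI composite: number of stays times mean cost,
    which also covers stays whose charges (hence costs) were missing."""
    return n_stays * mean_cost_per_stay

def potential_savings(actual_cases, us_population, best_state_rates,
                      avg_cost_per_case):
    """Savings if the Nation matched the average risk-adjusted rate of
    the best-performing (top decile) States; rates are per 100,000."""
    benchmark = sum(best_state_rates) / len(best_state_rates)
    expected_cases = benchmark * us_population / 100_000
    return (actual_cases - expected_cases) * avg_cost_per_case

# Illustrative inputs only.
composite_cost = total_national_cost(400_000, 6_500)
savings = potential_savings(
    actual_cases=400_000,
    us_population=290_000_000,
    best_state_rates=[80.0, 90.0, 100.0, 110.0, 120.0],
    avg_cost_per_case=6_500,
)
```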


2 HCUP Cost-to-Charge Ratio Files (CCR). Healthcare Cost and Utilization Project (HCUP). 1997-2005. U.S. Agency for Healthcare Research and Quality, Rockville, MD. www.hcup-us.ahrq.gov/db/state/costtocharge.jsp.


Caveats

Some caution should be used in interpreting the AHRQ QI statistics presented in this report. Some caveats relate to how the QIs were applied, some relate to ICD-9-CM coding changes and inter-State differences in data collection, and others are more general issues.

Rehabilitation Hospitals: These hospitals are excluded from the 2000-2005 NIS but included in the 1994 and 1997 NIS because of the change in the NIS sampling strategy (beginning in the 1998 NIS). Patients treated in rehabilitation hospitals tend to have lower mortality rates and longer lengths of stay than patients in other community hospitals, and the completeness of reporting for rehabilitation hospitals is very uneven across the States. The elimination of rehabilitation hospitals in 2000-2005 may affect trends in the QIs; however, based on previous analyses, the effect is likely small since only 3 percent of community hospitals are involved.

ICD-9-CM Coding Changes: A number of the AHRQ QIs are based on diagnoses and procedures for which ICD-9-CM coding has generally become more specific over the period of this study. Essentially all of the changes occur between 1994 and 1997. Thus, some 1994 estimates may not be comparable to the later estimates. These inconsistencies are noted in the footnotes of the NHQR tables with information on the affected ICD-9-CM code and direction of the bias when it can be determined.

Data Collection Differences Among States: Organizations that collect statewide data generally collect data using the Uniform Billing format (UB-92) and, for earlier years, the Uniform Hospital Discharge Data Set (UHDDS) format. However, not every statewide data organization collects all data elements or codes them the same way. For the NHQR, uneven availability of a few data elements underlies some estimates, as noted next.

Data Elements for Exclusions: Three data elements required for certain QIs were not available in every State: "secondary procedure day," "admission type" (elective, urgent, newborn, and emergency), and "present on admission." We modified the AHRQ QI software in instances where these data elements are used to exclude specific cases from the QI measures:

  • The PSIs and PDIs that use secondary procedure day were modified to calculate indicators without considering the timing of procedures.
  • For QIs that use admission type "elective" and "newborn," we imputed the missing admission type using available information. For all States except California, an admission type of "elective" was assigned if the DRG did not indicate trauma, delivery, or newborn. An admission type of "newborn" was assigned if the DRG indicated a newborn. For California, which did not provide any information on admission type, information on scheduled admissions was used to identify elective admissions and DRGs were used to identify newborn admissions.
  • For QIs that use present on admission (POA), we modified the AHRQ QI software to calculate indicators without considering whether the condition was present at admission.
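The admission-type imputation rule in the second bullet above can be sketched as follows; the two DRG flags are hypothetical stand-ins for the actual DRG lookup tables:

```python
def impute_admission_type(drg_is_newborn, drg_is_trauma_or_delivery,
                          state="XX", scheduled=None):
    """Impute a missing admission type from DRG information. California,
    which reported no admission type, instead uses a scheduled-admission
    flag to identify elective admissions."""
    if drg_is_newborn:
        return "newborn"
    if state == "CA":
        return "elective" if scheduled else "other"
    return "other" if drg_is_trauma_or_delivery else "elective"

# A non-trauma, non-delivery, non-newborn DRG is imputed as elective.
imputed = impute_admission_type(False, False)
```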

Number of Clinical Fields: Another data collection issue relates to the number of fields that statewide data organizations permit for reporting patients' diagnoses and procedures during the hospitalization and whether they specifically require coding of external cause-of-injury (E codes). The SID for different States contain as few as 6 or as many as 30 fields for reporting diagnoses and procedures, as shown in Table 4. The more fields used, the more quality-related events that can be captured in the statewide databases. However, in an earlier analysis, even for States with 30 diagnosis fields available in the year 2000, 95 percent of their discharge records captured all of patients' diagnoses in 10 to 13 data elements. For States with 30 procedure fields available, 95 percent of records captured all of patients' procedures in 5 fields. Thus, limited numbers of fields available for reporting diagnoses and procedures are unlikely to have much effect on results, because all statewide data organizations participating in HCUP allow at least 9 diagnoses and 6 procedures. We decided not to artificially truncate the diagnosis and procedure fields used for the NHQR analyses, so that the full richness of the databases would be used.

E Codes: Another issue relates to external cause-of-injury reporting. Eight of the 27 Patient Safety Indicators and three of the Pediatric Quality Indicators use E code data to help identify complications of care or to exclude cases (e.g., poisonings, self-inflicted injury, trauma) from numerators and denominators, as shown in Table 5. Although E codes in the AHRQ PSI and PDI software have been augmented wherever possible with the related non-E codes in the ICD-9-CM system, E codes are still included in some AHRQ PSI and PDI definitions. Uneven capture of these data has the potential to affect rates and should be kept in mind when judging the level of these events.

Effects of Adding New States to the NIS over Time: Over time HCUP has expanded through the participation of additional statewide data organizations. Because each yearly NIS is a sample of hospitals from the States participating in that year (and weighted to the universe of community hospitals nationally), potential exists for different practice patterns across States to influence national measures related to clinical practice over time.

The table below lists the States that were added to HCUP between the years used in this report.

Period       States
1994         AZ, CA, CO, CT, FL, IL, IA, KS, MD, MA, NJ, NY, OR, PA, SC, WA, WI
1995-1997    Added GA, HI, MO, TN, UT
1998-2000    Added KY, ME, NC, TX, VA, WV
2001         Added MI, MN, NE, RI, VT
2002         Added NV, OH, SD (AZ data not available)
2003         Added AZ, IN, NH (ME data not available)
2004         Added AR (PA data not available)
2005         Added OK (VA data not available)


For the first NHQR, we calculated QI rates in two ways to test the effect of adding States: first with data from the full set of States in HCUP in 2000, and second with data from the subset of States in HCUP in all three years (1994, 1997, and 2000), re-weighted to obtain national estimates. For most QIs, the results differed very little. These results are presented in detail in the Technical Specifications for HCUP Measures in the National Healthcare Quality Report and the National Healthcare Disparities Report (Barrett, Houchens, Coffey, et al., 2003), available from AHRQ on request.

Variation among State QI Rates. Variation in State rates can be caused by many factors, including differences in practice patterns, underlying disease prevalence, health behaviors, access to health insurance, income levels of the population, demographics, spending on health services, supply of health care resources, coding conventions, and so on. To understand some of the variation in State rates, we analyzed the 2001 State rates in relation to these types of factors. Appendix C shows, for each Prevention Quality Indicator (PQI) included in the NHQR, the analyses performed and whether the factors (each tested separately because of the limited number of observations) were positively, negatively, or not significantly related to the QIs.

In a subsequent analysis, we investigated sources of variation in Patient Safety Indicator (PSI) rates across States using 2004 data. Appendix D contains the executive summary from the report, Patient Safety in Hospitals in 2004: Toward Understanding Variation Across States. The analysis concluded that few State factors (such as State policy, hospital characteristics, coding practices, and socio-demographics) showed strong patterns of association with State-level variation in the nine PSI rates studied. The strongest result involved coding practices, specifically the number of diagnosis fields coded. Only about one in five correlations between the PSIs and State factors was statistically significant, and no general pattern emerged.

These analyses are intended to help readers understand some of the external factors that may be driving some of the State differences in PQI and PSI rates.

References

Agency for Healthcare Research and Quality. AHRQ Quality Indicators-Guide to Prevention Quality Indicators: Hospital Admission for Ambulatory Care Sensitive Conditions, Version 3.1. Rockville, MD: Agency for Healthcare Research and Quality, 2007.

Agency for Healthcare Research and Quality. AHRQ Quality Indicators-Guide to Inpatient Quality Indicators: Quality of Care in Hospitals-Volume, Mortality, and Utilization, Version 3.1. Rockville, MD: Agency for Healthcare Research and Quality, 2007.

Agency for Healthcare Research and Quality. AHRQ Quality Indicators-Guide to Patient Safety Indicators, Version 3.1. Rockville, MD: Agency for Healthcare Research and Quality, 2007.

Agency for Healthcare Research and Quality. Measures of Pediatric Health Care Quality Based on Hospital Administrative Data: The Pediatric Quality Indicators. Rockville, MD: Agency for Healthcare Research and Quality, 2006.

Barrett ML, Houchens R, Coffey RM, Andrews R, Moles E. Technical Specifications for HCUP Measures in the Sixth National Healthcare Quality Report and the National Healthcare Disparities Report. Washington, DC: Thomson Healthcare, 2008.

Barrett ML, Houchens R, Coffey RM, Kelley E, Andrews R, Moy E, Kosiak B, Remus D. Technical Specifications for HCUP Measures in the National Healthcare Quality Report and the National Healthcare Disparities Report. HCUP Contract Task 290-00-004 Deliverable #185. Washington, DC: The Medstat Group, Inc., January 2003.

Claritas, Inc. The Claritas Demographic Update Methodology, April 2005.

Fleiss JL. Statistical Methods for Rates and Proportions. New York: Wiley, 1973.

Houchens R, Elixhauser A. Final Report on Calculating Nationwide Inpatient Sample (NIS) Variances, 2001. HCUP Methods Series Report #2003-2. Online June 2005 (revised June 6, 2005). U.S. Agency for Healthcare Research and Quality. Available: http://www.hcup-us.ahrq.gov/reports/methods.jsp.

Raetzman S, Stranges E, Coffey RM, Barrett ML, Andrews R, Moy E, Brady J. Patient Safety in Hospitals in 2004: Toward Understanding Variation Across States. HCUP Methods Series Report #2008-2. Online March 2008. U.S. Agency for Healthcare Research and Quality. Available: http://www.hcup-us.ahrq.gov/reports/methods.jsp.

Current as of March 2009
Internet Citation: Healthcare Cost and Utilization Project (HCUP): Appendix B, National Healthcare Quality Report, 2008. March 2009. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/research/findings/nhqrdr/nhqr08/methods/hcupqr.html