Appendix D: Patient Safety in Hospitals In 2004: Toward Understanding Variation Across States
By Susan Redman, MSPH, Elizabeth Stranges, MS, Rosanna Coffey, PhD, Marguerite Barrett, MS, Roxanne Andrews, PhD, Ernest Moy, MPH, Jeff Brady, MPH
The emergence of patient safety as a contemporary health issue has resulted in the development and use of measures, such as AHRQ's Patient Safety Indicators (PSI), to track progress over time in improving patient safety. National PSI rates have been made available annually in the National Healthcare Quality Report (NHQR), and state-level PSIs will be released in the 2007 edition of the NHQR State Snapshots available on the Web in early 2008. The purpose of this analysis is to explore the extent to which differences across states in PSI scores can be explained and to describe what might account for those differences. The results are intended to help HCUP Partners and AHRQ respond to inquiries about state-level PSI rate variation, which can be substantial.
The analysis was performed on the nine State Snapshot PSIs that will be released in the 2007 edition of the NHQR State Snapshots; the state PSI rates were obtained by applying AHRQ Quality Indicator software to the HCUP State Inpatient Databases (SID).1 The PSIs for up to 37 states were compared against 58 state-level factors that can be broadly categorized as (a) state policies that are generally intended to affect the quality of health care delivered in the state; (b) hospital characteristics; (c) coding practices; and (d) other characteristics, such as population and health system characteristics. To the extent possible, we included factors in the external environment and factors inside hospitals that were conceptually related to medical error, quality improvement, or specific patient safety events. Separate correlations of each PSI and each state-specific factor were conducted (i.e., for each PSI, the analyses statistically examined the relationship between the state rates and a particular state-specific factor).
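The pairwise design described above can be sketched in a few lines of code. This is an illustrative outline only, assuming randomly generated stand-in data; the state names, PSI rates, and factor values are hypothetical, not HCUP data, and the actual analysis used AHRQ Quality Indicator software rather than this code.

```python
# Illustrative sketch of the pairwise correlation design: one correlation
# per PSI/factor pair across up to 37 states. Data here are random stand-ins.
import math
import random


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


random.seed(0)
n_states = 37  # PSI rates were available for up to 37 states

# Hypothetical stand-ins for the 9 PSI rate series and 60 explanatory
# factor series (58 factors plus dummy variables, per the text).
psi_rates = {f"PSI_{i}": [random.random() for _ in range(n_states)]
             for i in range(1, 10)}
factors = {f"factor_{j}": [random.random() for _ in range(n_states)]
           for j in range(1, 61)}

# Separate correlation for every PSI/factor pair: 9 x 60 = 540 analyses.
results = {(p, f): pearson_r(pv, fv)
           for p, pv in psi_rates.items()
           for f, fv in factors.items()}
print(len(results))  # 540
```

Each resulting coefficient would then be tested for statistical significance, PSI by PSI and factor by factor, as in the analysis summarized below.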
Overall, we found that only about one in five correlations between the State Snapshot PSIs and potential explanatory factors was statistically significant. The number of statistically significant associations for the nine individual PSIs ranged widely from 0 to 21 out of a possible 60 associations, including dummy variables. In addition, the nature of the significant PSI/factor associations is mixed in that some have plausible explanations and others do not. In the latter case, these may be artifacts of other phenomena or the result of chance statistical significance, given that 540 correlation analyses were performed (i.e., 9 PSIs times 60 independent variables).
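The concern about chance significance can be made concrete with simple arithmetic. Assuming a conventional 5 percent significance level (the report does not state the threshold used, so this is an illustrative assumption), the expected number of spuriously significant results among 540 independent tests with no true associations would be:

```python
# Expected false positives under the null hypothesis for all 540 tests,
# assuming (hypothetically) a 0.05 significance threshold.
alpha = 0.05
n_tests = 9 * 60  # 9 PSIs times 60 independent variables
expected_false_positives = alpha * n_tests
print(n_tests, expected_false_positives)  # 540 27.0
```

That is, roughly 27 significant correlations could arise by chance alone, which is why isolated significant associations without a plausible explanation warrant caution.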
Although there is no pattern to which associations are statistically significant or their direction at the individual PSI or factor level, a somewhat different picture is revealed when factors are aggregated. Among factor categories, the most consistent analysis results are those pertaining to the role of coding in explaining variation in state-level PSIs. Taken together, the coding factors accounted for one-third (33 percent) of statistically significant associations between State Snapshot PSIs and explanatory factors. The findings for this category are strengthened by the fact that associations were consistently positive in direction (i.e., increases in factor values were associated with higher PSI rates). The average number of diagnosis fields filled for discharges in 2004 yielded the largest number of statistically significant associations, suggesting that higher PSI rates sometimes may reflect greater attention to coding, not just worse health outcomes.
The analysis of State Snapshot PSIs identified few state-level factors that showed a consistent pattern of association with the nine state-level PSI rates. We suspect that many of the factors that should influence patient safety indicators are too new in development or too remote from where safety problems occur to find strong associations in this state-level analysis. For example, state programs that proactively disseminated information to the public or providers were relatively new in the early 2000s. Also, medical errors and their prevention occur at the provider, not the state, level. With this simple and aggregated analysis, we are not surprised to find few conclusive results.
As expected, the strongest result involved coding practices. In a similar analysis of state-level PSIs and Prevention Quality Indicators (PQIs) conducted in 2003 using 2000 data, one type of coding practice (use of E codes) had a strong, consistent relationship with PSIs. In the current analysis, the average number of diagnosis fields used was an important factor; more fields were associated with higher PSI rates. This suggests that states leading the way to safer medical practice should expand the number of diagnosis codes reported and collected. Doing so would make room for reporting of medical errors for complex clinical patients who already have numerous conditions coded on their discharge records. More and better reporting about patient safety events is essential for learning about and improving the quality of care.
One reassuring result is the lack of consistent statistical relationships between patient and hospital characteristics and safety measures. This supports our earlier findings and the conventional wisdom that errors are unintentional, random events that can affect any patient and that all hospitals need to improve safety.
1 For further detail, see Methods Applying AHRQ Quality Indicators to Healthcare Cost and Utilization Project (HCUP) Data for the Fifth (2007) National Healthcare Quality Report, HCUP Methods Series Report #2007-06.