Making the Health Care System Safer: Town Hall Meeting: Translating Reporting Systems into Improvements

Third Annual Patient Safety Research Conference

Proceedings of the third annual patient safety research conference.

Town Hall Meeting

Translating Reporting Systems into Improvements: Where Does Patient Safety Go from Here?

Moderator:
Wilson Pace, M.D.
Director, American Academy of Family Physicians (AAFP) National Research Network

Deborah Queenan, Agency for Healthcare Research and Quality (AHRQ)
Kimberly Rask, M.D., Ph.D., Director, Emory Center for Health Outcomes and Quality
Ellen Flink, New York State Department of Health
Carl Sirio, University of Pittsburgh
Victoria Fraser, Washington University

Deborah Queenan

Welcome to the AHRQ living room. We've invited some of our close friends to come in, have some coffee, and exchange some words with us. Today, we'll be discussing patient safety in a very specific way. We'll be discussing translating systems into improvements. Where do we go from here?

We know we've all been struggling with reporting. This is an opportunity for us to have a conversation between the audience and our illustrious speakers, but also an opportunity for you to exchange among yourselves your ideas, your questions, your comments, and your puzzlements, whatever. This is open and free flowing. What will happen is each of our colleagues will give a 10 to 15 minute presentation and then we will have five people with roving microphones in the audience to pick up any comments or questions that you may have.

I'd like to begin by introducing our moderator, Wilson Pace. Many of you know Wilson. He is the Professor of Family Medicine at the University of Colorado and the Director of the AAFP National Research Network. He is one of our grantees. He is doing a demonstration project of improving patient safety in primary care.

Wilson Pace, M.D., Moderator

My job this morning is to set the stage for everyone else who will be presenting here. This is a series of talks about patient safety error reporting systems and how to move from the information in those systems to improvement. If you have been to as many of these meetings over the past 2 years as I have, I'm sure you've heard the usefulness of error reporting systems debated many times. If this is like any other audience, there are some of you who consider them essential, not just for accreditation but also for making improvements. I'm sure there are others of you who consider them worthless. We all know the issues with them. It is hard to get numerators, let alone denominators. It's hard to know whether you've actually made changes based on your error reporting systems. If any of you were at our taxonomy conference yesterday, you know that just deciding how you want to use the data is in and of itself a challenge.

I want to tell you an anecdote to start the morning, just to set the stage here. Though it may not relate to reporting systems, it's a very interesting anecdote about translating research into practice. If you're in ambulatory care, you may remember that in September 2003, Wells et al. published a very interesting randomized controlled trial indicating that a blood test called the D-dimer was a very useful screening test for deep venous thrombosis. In fact, it was sensitive enough that if it was negative, you could assume that this diagnosis, which is very hard to make in the ambulatory setting, could be excluded.

Within two months of that article being published, we started receiving reports in our system of problems related to the use of this test in ambulatory care. The first report came from a practice well over 100 miles from the nearest hospital in eastern Colorado. They started using this test a lot sooner than we usually hear about practices using new ideas. Within a few months, we actually received a number of these reports.

Our system sent out an alert to all of our practitioners on January 15, 2004, approximately 4 months after the article came out, because we had identified this as a rather significant safety concern. People were ordering these tests and then not responding to the results until as much as 3, 4, or 5 days later. In the meantime, if you actually have this disease, you have a life-threatening situation. None of the reports we obtained went on to any patient harm that we can discern, but it doesn't take any great leap of faith to understand that if this continued across the country, someone would die in the interim. That's pretty much a given.

We moved quickly to look at an intervention and found out by talking to our pathology colleagues that many labs can actually set up alerting systems independent of a critical value, because this is not a critical value. They can transmit results immediately through all kinds of means: pagers, fax, or whatever it might be. We thought this was great. Within 6 months of the report coming out, we had started a small pilot project, only to run into the usual problem of variability that we see everywhere. Physicians' and clinicians' variability in how they want to practice never ceases to amaze.

This is an anecdote. I understand that. There isn't any chance that I could sway you one way or the other. I don't believe that anyone who walked into this room feeling that reporting systems were fantastic feels any differently about them because of this. If you thought they were worthless, I don't think this anecdote should change your mind either; otherwise, you're not an evidence-based practitioner. This is just to set the stage, and these four speakers will hopefully help us move a little further from anecdotes to science. That's what we're here to try to do this morning.

Our first speaker this morning is Kimberly Rask. She is the Director of the Emory Center for Health Outcomes and Quality. She will be discussing a voluntary hospital reporting program in the State of Georgia called the Partnership for Health and Accountability (PHA). She is the principal investigator (PI) of a demonstration project involving reporting systems.

Kimberly Rask, M.D., Ph.D.

Good morning and thank you for inviting me to participate in the discussion this morning. As Dr. Pace mentioned, I'll tell you a little bit about the voluntary reporting system that we have in Georgia. As Deborah mentioned earlier, the title of this session is "Translating Reporting Systems into Quality Improvement." A helpful way to think about the Georgia system, I think, is to flip that around a little and say the PHA model was built more to translate quality improvement into a reporting system. The Georgia Hospital Association worked with a group of 75 other constituent organizations to create PHA back in 1999, before the landmark Institute of Medicine (IOM) report came out. Hospitals, professional organizations, and State agencies came together with the Georgia Hospital Association to form PHA. They worked with the State legislators to put together a broad based voluntary patient safety system with a wider focus than the State mandated error reporting.

In 2001, AHRQ provided funding to PHA both to help support some of the PHA programs and also to add an evaluation component to the project. That's when I and the other Emory researchers became involved with the partnership. PHA is really a multifaceted program that has a variety of focuses. If you look at this picture of the homepage, you can get a sense of many of the different activities going on. I'll just highlight a few of the patient safety related activities that PHA undertakes. I encourage you to visit the Website or to go to the product café where there are several posters with more detail about PHA programs. You can also speak to several members of the PHA staff here at the meeting and they can give you a lot more information on the range of activities that it undertakes.

PHA was designed to support a very broad range of dissemination and implementation programs that focus on translating research and best practices into better clinical practice in all hospitals in the State of Georgia. In this sense, the role of error reporting is to motivate internal quality improvement in the hospital that reports the event. It's not focused on denominators, and it's not intended as a statewide profiling tool. It's really used as a tool to promote quality improvement.

PHA was formed as part of a very broad based coalition. The first issue is how to turn that broad based coalition into an entity that can operationally do something day in and day out. The PHA structure created a hierarchy of committees. The membership of each of these committees includes representatives from the broad based constituencies relevant to the specific focus of each committee. There are over 100 hospital representatives and physicians serving on these different committees. The PHA staff is dedicated to keeping things moving forward and to making sure that programs are delivered.

In order to promote a non-punitive environment, a second key aspect of this program is confidentiality. All of the data and information reviewed by any of these committees are protected by the State peer review protection laws. The people who serve on these committees or evaluate these data are always cognizant of the importance of staying within the framework of the peer review protection legislation to protect any information that's shared.

At this point, all 148 of the eligible hospitals in Georgia are actively participating in one or more of the PHA programs. PHA uses a variety of methods to draw in and engage hospitals in their processes. That's particularly significant in a State like Georgia that has a very heterogeneous mix of hospitals. Almost half of our hospitals are smaller than 100 beds. Our hospitals are about evenly distributed between rural and urban settings.

Four field representatives each work with the approximately 40 hospitals to which they have been assigned. They interact with the hospitals regularly, including at least two annual on-site visits. The PHA program also makes extensive use of teleconferences and Internet postings. This is very important because it helps encourage participation by many of our smaller rural hospitals that would not normally be able to send staff to meetings held in a central location.

I think there are three design elements to the PHA program that are particularly important to promoting its success:

  • The first is trying to leverage any reporting that hospitals are already doing, to avoid redundancy and minimize any new data collection. There is always a tension and a balance between precision and practicality. As a researcher, you can probably guess on which side I tend to fall. To the credit of the PHA staff, they're always very aware of the burden on hospitals and that the most important thing is to get a good response from a broad range of hospitals. We often have to balance new information or different data sources against something that can be sustained over the long term.
  • The second important characteristic is really reinforcing the non-punitive environment. Hospitals benefit from reporting because by reporting events or performance to PHA they can receive quality improvement process support. PHA also will help individual hospitals identify and disseminate best practices, often from their peer organizations around the State, which makes it very relevant to them. PHA becomes a safe place for hospitals to share both their problems and their successes.
  • Lastly, another important piece is the role of telephone conferences and Web-casts to bring together a wide variety of people in busy hospital settings. Normally, these people wouldn't be able to drive a couple of hours to get together.

The four programs that I'll highlight this morning are:

  • A public reporting process.
  • A medication error reduction program.
  • A clinical guidelines program.
  • An event reporting system.
A Public Reporting Process

The first program that I'll mention related to patient safety is an annual report that PHA produces for the lay public. It's called Insights. Participation in this report is voluntary, but in the most recent year, 133, or 88 percent, of the eligible hospitals in Georgia included their data, reporting clinical performance relative to Joint Commission on Accreditation of Healthcare Organizations (JCAHO) benchmarks.

A Medication Error Reduction Program

The second program is a comprehensive medication error reduction program. PHA provides hospitals with both a template and structures for analyzing internal data and for developing an improvement plan. Some hospitals targeted a specific error and saw a reduction in that error rate. For those that did see a reduction, the average reduction was just over 35 percent. Other hospitals focused on increasing reporting in a non-punitive environment and did in fact increase reporting. Approximately two-thirds of the hospitals in the two groups were successful.

I think a couple of things are encouraging about this. The first is that a quality improvement program focused on error reduction can be applied in a real world setting, with real hospitals facing all the pressures we know exist in the hospital environment, and a majority of them will actually see improvements. From the perspective of voluntary systems, what I also find encouraging is that 140 out of 148 hospitals participated. I also think it's encouraging to look at the roughly 20 percent of participants who did not get the results they hoped for but still reported them. I would always be nervous, as I'm sure any of the rest of you would, working in process improvement where everyone has 100 percent success. We know that doesn't happen. I think it's a sign that they have been able to create a non-punitive environment, so that hospitals are comfortable sharing their bad news as well as their good news. The incentive for sharing the bad news is that they will get support from their field representatives and from the PHA program to do a better job the next time.

A Clinical Guidelines Program

The third program focuses on clinical patient safety issues. Again, here we have almost 100 percent participation by hospitals. Most of the hospitals join a structured program focused on reducing the incidence of deep vein thromboses (DVTs) or pressure ulcers, or on fall prevention. The remaining hospitals participate in an individualized program that they can use to target a specific problem area for their individual hospital.

What kind of evidence can we give you today that this voluntary system might make a difference? We can point to a couple of things that we find encouraging. First, the vast majority of hospitals participate, and most of the hospitals that do participate in the patient safety program see measurable improvements. If you visit one of the posters in the product café that Dianne Green has, you'll note that she was able to show that participation in PHA was associated with larger clinical improvements. Most interestingly, I think, hospitals that were more active in PHA saw the greatest improvements.

An Event Reporting System

Building on this foundation of patient safety related quality improvement activities, PHA is now rolling out a Web-based voluntary event reporting system. It was piloted over the last year and went live about a month ago. In Georgia, hospitals are required to report unexpected deaths, rape, and wrong site surgery to the State regulatory agencies. PHA has created a Web tool that captures a broader range of events, those in harm categories E through I. It's important to understand that this system exists to support process improvement in response to the events that are reported.

Because it's a voluntary system, it's been designed to provide benefits and incentives to hospitals in order to participate. Some of the incentives are anonymity and peer review protections. In addition, a hospital that reports an event is guided through a quality improvement process for analyzing the underlying causes and contributing factors of the event that they're reporting. They also have the opportunity to learn from both the successes and problems identified in peer institutions in their State.

Hospitals are asked to report the basic details about the event within 10 days of its occurrence and then return to the site after completing an internal root cause analysis (RCA). The online form uses mostly pull-down menus, but text boxes are available for hospitals to provide more specific details. In the top left-hand corner of the form, each event is assigned a unique event ID. That ID is known only to the hospital that submitted the report. The number is not saved by PHA, and no other entity has the ability to link the unique event to any particular institution. When the hospital comes back after doing its root cause analysis, it needs to know the event ID number to get back into that event and update the record.

If the event that's reported falls under the auspices of State mandated reporting, then the Web tool prompts for and produces the State reporting form that then can be printed and faxed to the regulatory agency. None of the identifying information required by the regulators is saved in the Web tool or by PHA.

When the hospital enters an event, they receive a pre-populated summary report based on the information they entered. The report can be used for their internal quality improvement purposes. They are also given structured prompts to help them develop improvement plans based upon the root causes that they identify. This event information then is also used by PHA staff to monitor trends and produce safety alerts or teleconferences on issues that they think may be relevant to other hospital settings.

Building a statewide voluntary system is not easy. It takes a very strong connection with the participating hospitals. It takes a lot of cheerleading, a lot of ongoing support, and perseverance and fortitude to hang in there for the long haul. I think the long-term focus is part of what's very important here because as any of us who work in quality improvement know, systematic change is not a quick process.

If we want to have long-term success, it's also critical that we design systems that can gather the information that we need but also minimize the reporting burden on the hospitals that we hope will continue participating in this process.

PHA is founded on the principle that successful system change requires active engagement. The philosophy underlying this program is that regulatory solutions have a limited ability to promote system change. They can be a catalyst for change, but I don't think that they support the actual implementation of that change. This voluntary system has tried to support change.

PHA hopes to facilitate patient safety improvement by:

  • Supporting individual hospitals in developing improvement plans based on the events that they identify.
  • Supporting individual hospitals to help collect and interpret their own internal data.
  • Performing a dissemination function to other hospitals.
  • Providing hospitals with a venue where they can learn from each other's problems as well as each other's successes.

This has been a whirlwind tour, which hopefully makes some sense to you. I encourage you to speak to any of the PHA staff here or look at the information in the product café to learn a little more about it. Thank you for your attention.

Dr. Wilson Pace

Our next speaker is Ellen Flink with the New York State Department of Health. She has been involved in the design, implementation, refinement, and use of the New York State Patient Occurrence Reporting and Tracking System (NYPORTS) since its inception. She is the Director of the NYPORTS AHRQ demonstration project grant and is here to speak to us about issues involved with a mandatory State reporting system.

Ellen Flink

I'm happy to be here, and thank you for allowing me to participate in this town hall meeting. I'm here to discuss New York State's mandatory reporting system. I would say that we've probably been a pioneer in terms of mandatory incident reporting systems. We've had a system in place for about 19 years now. I'll briefly go through some of the history of our incident reporting system. It is based in statute; the statute is referenced here. It really was a response to the medical malpractice crisis in the 1980s. Our first reporting system began in 1985. It was a paper-driven, telephone-based reporting system, and we pretty much collected everything.

We realized that that was a burdensome system. We then moved on to another system called the Patient Event Reporting and Tracking System (PERTS). That was based on an algorithm of treatment and patient harm. Unfortunately, it led to variability in interpretation, and the data that we got out of that system were not very useful. In 1995, we formed a work group of clinicians and a consumer representative to redesign the system. That's the system that we have today, NYPORTS. After pilot testing, we rolled it out statewide beginning in 1998. It's based on a short form reporting format for every event that is put into the system and a root cause analysis framework for the most serious events. It's an interactive Web-based system, which makes it easier to report events.

Because NYPORTS is a mandatory reporting system, clear definitions are a vital part of the system. Although you might think everything is clear, we have this nice "includes/excludes" list format that we think clearly identifies those events that are supposed to be reported and specifically excludes events that are not required to be reported. We pilot tested the includes/excludes list extensively early on. We made changes and revisions based on feedback from hospitals participating in the pilot testing. We developed a manual to illustrate some examples and clarifications. Even with all of that, there are still some gray areas, and we still get calls almost every day from hospitals saying, "I have this case; is it reportable? Does it meet the criteria?" We still struggle a little bit with that.

We have several volunteer subcommittees that work on our system. Because we have limited resources in the Department of Health, we've recruited volunteer subcommittees to help us do the work. For example, a refinement subcommittee reviews the includes/excludes list, and we make changes based on its recommendations.

In terms of the types of information that we collect, we have certain types of less serious events for which we require just the short form. The short form collects demographic information about an occurrence and a brief narrative about what happened. It's really meant for facilities to internally track and trend these occurrences, perhaps do a root cause analysis on a group of occurrences, and use them as a focus for quality improvement activities.

The more serious types of events such as wrong site surgery, unexpected deaths, or retained foreign bodies require facilities to report on the short form and then to conduct a root cause analysis, and submit that information within 30 days of submission of the short form.

We collect approximately 30,000 reports in our system annually. Four to five percent of those are these serious types of events for which an individual investigation is required.

How do we use this information? It's great to collect all of these data, but if you can't use the information, it doesn't matter whether it's a voluntary system or a mandatory system. It's important for the State to be able to utilize this information and to provide feedback to facilities.

Hospitals use the data. There is a comparative reporting function within the system. It allows hospitals to actually look at themselves and compare their experience to regional hospitals in the State, their peer groups, or on a statewide basis. They can use this reporting function to see where they fit within the rest of the State. They can also export their data into their own database so that they can manipulate it in ways that the system itself cannot do.

Hospitals can use the information for their own quality improvement initiatives, and various facilities have done so. They can track and trend the information over time, present it to boards or other meetings, and perhaps identify areas where they could start quality improvement initiatives to improve outcomes in their hospital.

The State Department of Health uses the data as well. We have a news and alert newsletter that we issue on a quarterly basis. It's similar to a sentinel event alert that JCAHO issues. We have issued advisories to hospitals. There are some serious occurrences that we identify in the system that we can really only see on a statewide basis. There might be just a few serious events statewide that an individual hospital wouldn't know about. We have a public report that we issue annually. Because this system is built within a regulatory framework, we use it internally to share with other offices and bureaus within the department. It is part of our overall surveillance activities and hospital profiling. We share information at statewide and regional meetings.

When we first started to design the system, a Web-based interactive system was actually state-of-the-art back in 1998. Obviously, electronic reporting facilitates the use of data. It makes reporting easier. Hospitals can query the database, and the State can find trends that wouldn't be identifiable at an individual facility level. We have a number of graphic formats that we can use to generate reports. We're currently looking at redesigning the reporting module to increase the different types of reports that the system can generate, both for facility use and for the Department.

In terms of projects that we've undertaken based on NYPORTS data, we've done an analysis of unexpected deaths in the system. We have several hundred of these types of reports, and we've broken them down by subcategories. We've drilled down on the root cause analyses and identified common themes. We're in the process of writing up those findings now. We've also undertaken a medication error related project and have a poster out in the product café that summarizes the information from that study. We have also written up the findings for publication in the AHRQ/DoD Advances in Patient Safety.

In terms of how facilities have used the information and undertaken projects based on NYPORTS data, several facilities have looked at efforts to reduce pulmonary emboli (PEs) and lower extremity deep vein thrombi (DVTs) in hospitalized patients. They've targeted improvements in medication use based on the information in the system. They've reduced intravascular catheter related pneumothoraces and reduced returns to the operating room. There are various categories in the system for which a facility can see where it fits and perhaps target that area to drill down and initiate quality improvements.

As director of the AHRQ-funded demonstration project, I can say that NYPORTS serves as one of 16 reporting demonstrations. We've identified areas of study in the NYPORTS system and designed demonstration projects with groups of hospitals to implement evidence-based interventions, test the effectiveness of those interventions in changing physician practice, and then share successful intervention strategies with hospitals in New York State and nationally.

Again, in order for a system to be useful, the data must be effectively disseminated. These are just a few venues that we've selected to disseminate the information:

  • We have a very large statewide work group that meets on a quarterly basis. This is a first level of presenting findings and different types of information from the system. It's attended by approximately 25 percent of the hospitals in New York State.
  • We have regional forums.
  • We have presented findings at professional meetings.
  • We have a bulletin board attached to our system where we can post new information for facilities to download, making it easier for them when they go into the system.
  • We've conducted numerous educational video conferences and had a patient safety conference a couple of years ago. We plan to have another one in the spring of 2005.
  • We've also developed a patient safety award program. Many hospitals submitting proposals describing what they've done to improve patient safety have used NYPORTS data as the basis for these projects.

Just to give a sense of the number of reports that we collect, not to dwell on the specific numbers but on the ability to have a large amount of data to analyze: over a 6-year period, the NYPORTS system has captured almost three times as many events as JCAHO's sentinel event database, which is a voluntary national reporting system covering a longer period of time. These are serious events.

As Kim was discussing, we also have confidentiality protections in the system. They prevent disclosure of any reports for the purposes of discovery. However, they do not protect the individual practitioner if that information is brought to quality assurance meetings. The reports themselves are protected. The Department of Health, of course, can at its discretion investigate any incident and take subsequent action if we identify violations.

In terms of some of the challenges that we still face, completeness of reporting is an issue. We have denominator data; we have an administrative database that we can use for denominators with NYPORTS as a numerator, but there are still issues with completeness of reporting. There is no real way to crosscheck, and there is no gold standard to make sure that we have a complete database. It's very resource intensive to support the system at the department level, and you really need a dedicated individual at the facility level to support the system. We're currently looking at the quality and accuracy of the root cause analysis (RCA) reports that we get in the system. We've designed an evaluation tool and are currently doing training around the State to increase the quality of the information in the RCAs so we can do more meaningful analysis.

We continue to do education and training annually and if we have specific targeted areas, we do it more frequently. That's always a challenge because again with limited resources, we have a big State and it's a large area to cover. We have tried to do video conferencing when we can to cut down on the travel. Many times, we have to go back to NYPORTS 101 to get back to the basics. Again, we strive for quality improvement in monitoring and evaluation of the system. We realize that it's not a static system. It has to evolve. We're looking at new and different ways to improve the system electronically and new and different ways to look at the information reported.

In terms of lessons learned, we realize that information must be meaningful and useful to end users. We feel it's important to obtain buy-in early in the development process, and that's what we did here. Again, we started out with a small work group in order to get the work done: a work group of clinicians and a consumer representative. The Web-based system allows facilities to access data and produce reports, and this is an important part of the system. Ongoing education and training are critical and reduce variability. Because of staff turnover in hospitals, there is always a new person who will come in unfamiliar with the system.

It's important to field test the system design and to hear the feedback so that meaningful changes and improvements can occur. Again, it's important to have clear definitions of reporting criteria to reduce the variability in reporting and to make the system as useful as it can be. I think we have been on a journey. We continue this journey and it is on the journey where you learn the true lessons. Thank you.

Dr. Wilson Pace

Our next speaker is Carl Sirio, an Associate Professor of Medicine and Pharmacy at the University of Pittsburgh. He's involved with the Pittsburgh Regional Healthcare Initiative and the PI for their demonstration project in western Pennsylvania.

Carl Sirio, M.D.

Back in 1997, we created a framework that ultimately included 42 hospitals in collaboration with the University of Pittsburgh, the RAND Corporation, Purdue, and the framing organization of the Pittsburgh Regional Healthcare Initiative. At that point, it probably was unique. Since then, other efforts around the country, and I think we've heard about some of them this morning, have come to fruition with respect to trying to organize data to actually improve the safety of care.

In the next 10 or 15 minutes, I'll touch on a couple of points, including the shared learning model and the framework that we used to build our assumptions. I think as you hear the assumptions we made, they will be generalizable to most of you. I'll also mention some relatively targeted results for the purposes of this talk. We had many goals, but I'm really going to focus on medication error and hospital acquired infections, lessons learned and problems. Lastly, I'll give some sense of where I think the future lies and where our greatest pitfalls are.

Our model starts with the environmental factors, which are things with which we're all familiar. They're the milieu in which we practice and health care institutions exist: the educational framework, the accreditation structure, laws around liability, safety laws, and mandatory reporting systems. Within that big framework in which we work, institutions have responded variably with respect to the safety culture that they bring to the task. In particular, we have found, and I think many of you will confirm, that the safety culture as embodied by the CEO, his minions, and the leadership of the medical staff is probably the most important factor determining success or failure in patient safety. We have done a lot of case study work, both qualitative and quantitative, that has confirmed how important this really is.

The safety infrastructure includes the tools and the accoutrements that all of you have or don't have in your institutions to actually take data and make that ultimately foundational learning upon which we can build improvement.

We have three major assumptions:

  • Reporting systems are in fact the foundation upon which this work is built.
  • We need to link the reporting system to an ability to share these data, and what we've tried to do in a large region is share across institutions.
  • Problem-solving systems can then begin to evolve that don't necessarily have to be re-invented every time the same type of problem develops.

Our region is Allegheny County, which includes Pittsburgh. It's a highly medicalized community, like most of yours. We have many hospitals. These are competitive institutions. We have two major systems, a whole host of smaller ones, and several independent hospitals fitting into a larger geographical context of about 150 miles by about 150 miles from which we drew our hospital participants. These hospitals came together, like I said, starting in 1997 and agreed to several major goals. Two of the most important goals were the elimination of medication error and the elimination of hospital-acquired infection.

Fundamentally, we have been on that march now for several years. Have we gotten there? Obviously not. Through a series of very beneficial partnerships with the United States Pharmacopeia (USP) and the Centers for Disease Control and Prevention (CDC), we created a series of mechanisms to report infections in ways that haven't been done before. As many of you know, the MedMARx® system of the USP has been utilized as one of the more credible voluntary systems, using the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) criteria that I think one of the earlier presenters showed you. This provided a robust data stream with respect to not only the errors but also the potential to identify causality. We'll discuss that shortly.

As many of you know, the CDC's National Nosocomial Infections Surveillance (NNIS) system is a relatively constrained system in that, to participate, you have to meet a series of very specific criteria. Over 300 hospitals participate, and through special arrangements with the CDC, we were able to utilize components of it. We didn't use all of it. That allowed us to have a well-validated system for capturing infections and allowed the CDC to begin to test how they could roll out a system like this in all hospitals around the country if they so chose.

Assuming that we have fostered the right kind of culture with respect to medication error, if in fact we are doing our job, both the number and the types of reported errors should increase over time. They should certainly increase over the first couple of years of reporting, until we start to plug the dike and solve problems. With respect to infection, we focused initially on central line-associated infections in the intensive care unit (ICU): bloodstream infections that carry a high cost in terms of extended hospital length of stay, dollars, and, most importantly, lives lost. With respect to central line-associated bloodstream infections (CLABs), given the assumptions that the infection control community makes about capturing all of the infections, we should actually see a decrease over time if we were successful in linking our model.

Let me mention some of the medication error reporting data we collected from the third quarter of 2001 through December of 2003. Starting in the third quarter of 2001, we had about 700 reported errors in the quarter. At the end of the last year, we had 5,000 voluntary reports per quarter in our hospitals, reflecting a significant increase over time. In the first quarter we had very few A and B errors, the type of errors that don't reach a patient. However, if you're going to do successful prevention, it became very clear to us very early on that you actually have to attack those. The pharmacy community in particular in the region agreed that it was time despite the burden to begin to collect those data.

We started with 250 of those A and B errors, rising to 3,000 per quarter by the end of 2003. The errors that don't reach the patient were 0.3 percent of all errors reported in the beginning and 58 percent by the end of last year. That suggests a fairly clear recognition on the part of institutions that catching errors, recording them, and beginning to understand them before they ever got to a patient was where the action probably was, not in the high-end, high-visibility events where you're actually harming people and potentially killing them.

With respect to infections, we also proved our hypothesis. Over this period, central line-related infections in the ICU in fact dropped significantly, hospital by hospital. Both the medication error and the infection data are highly significant sets of results. We were able to do statistical analysis for 29 of the 35 hospitals represented in this particular study. We found a mixed message. For 22 of the 29 hospitals where sample size allowed robust analyses, rates were in fact decreasing. The other seven hospitals had rates that were either bouncing around or actually increasing.

I won't get into the causality of that in the formal remarks but I think it should provide some fruitful discussion in terms of the why's in the discussion period. Technically, we did some of the same things you've heard, so I'll really gloss over this. We created a data-coordinating center under the auspices of the AHRQ funding with a central repository at the University of Pittsburgh to collect the data centrally, analyze it by hospital, and then redistribute it.

One of the major findings, and one of the major problems I hear embedded in all of the remarks both yesterday and this morning, is the relatively remote nature of the data from the problem. One thing that I've become convinced of, and that we have seen in our work over time, is that the closer these data are to real time, and ultimately the same day, the more likely it is that we'll actually effect change. We created the infrastructure on the shared-learning side to begin to look at what we could do with the data. We created regional advisory groups in medication error, divided by topical area. We'll discuss that shortly. Infections were handled by geographic region. We broke that 150-by-150-mile area into discrete areas where small groups would come together and decide on priority setting with respect to shared learning.

Specifically, one task I was asked to discuss is how we use these data. I heard a couple of presentations yesterday where I got a confirmation of some of the other things people have found around the country. One of the biggest places where errors occur is in the use and utilization of narcotics. We started small. We took an issue with fentanyl, which is a high potency narcotic, now delivered transcutaneously. We began to look at what errors were associated with the use of fentanyl in the chronic care interface between the hospital and the outpatient setting where people would come in with patches and wind up getting an oral or IV narcotic on top of it, a common problem.

We used this relatively small population to pilot test the effectiveness of sharing information across 40-plus hospitals. We then moved on to a problem much more significant in terms of population: patient-controlled analgesia. It affects many patients, in the post-operative period in particular, where there are all kinds of problems with the technology of the pumps, the delivery systems, the lockouts, nurses' understanding of how to set them up, and, most particularly, the multiplicity of ways narcotics are ordered.

Another way we used these data was, and this was before the Joint Commission came out with their requirements on abbreviations, to look at how we could actually eliminate unsafe abbreviations. With respect to the infection side of the equation, we went down the path of recognizing and knowing that there are clear evidence-based ways that you can eliminate infection in the ICU related to central lines. By following stringent techniques from skin preparation to gowning to changing the lines, when to, when not to, there should be no infections. In fact, we spent about 2 years working with hospitals in getting processes in place to allow them to move on this path of eliminating infections.

Compare that to our huge problem with resistant staph in Pittsburgh. We were unable to come to consensus with respect to how to frame this problem given the controversy that exists both in infection control and in infectious diseases.

I want to stop here and assume that there may be questions with respect to what this all means in the discussion period. The last two speakers cut to the issue of causality.

Can we attribute achievements in Pittsburgh with these data systems and shared learning to the work we did, or was it just part of the environment? There are a couple of things that at least tease us to think that what we have been doing may have some impact. The first is that this work started before patient safety ever became a national hot buzzword. It's certainly possible that hospitals and institutions were moving in this direction but it's our conjecture that the environment was such and the milieu we created was such that we have had an impact. That's further potentially confirmed by the fact that if you look at our data when compared to national data, especially in medication error that doesn't reach the patient, you see an increase in the national USP data with respect to reporting. You do not necessarily see anything close to the magnitude in errors that we've seen focusing on the near misses.

Similarly, with respect to the CDC data, we have shown a dramatic decrease in central line associated infections over the last couple of years. The CDC since 2000 shows no such change nationally. Again, I can't tell you with certainty the effect of what we've done but I think there is some clear sense that putting together a regional structure can improve the outcomes for patients.

We have found that we still need to attend to many issues and problems. There are significant variations in the way hospitals generate these data leading to questions about reporting, comparisons, and references. There are clear limitations in the systems. I think one thing we've learned on the medication error side is that the very detailed nature of the reporting system starting with the error and working all the way to the cause is outstanding for the purposes of a constrained system. For the purposes of reporting the thousands of errors, and we know we're just hitting the tip of the iceberg, it remains a very cumbersome, labor-intensive, and costly system that I don't think is sustainable over the long-term. Flip that to what the CDC has provided us, a great opportunity to use a nationally validated system that doesn't have enough information because the infections are captured obviously sometime, often days, after the incident occurred. More importantly in this system, there is no ability to attribute causality. I think that we need to figure out using these two examples at the poles some way to balance data collection burden and cost with real-time needs around causality.

Information sharing has been a real problem. I don't know whether we'll have a different experience in Pennsylvania now that we're up and running with a State-mandated system for reporting severe incidents, which started in the last 90 days. It just started, so I don't know whether our experience will be similar to or different from New York's. I do know that, from a voluntary perspective, it has been very difficult to keep the herd aligned and the cows moving in the same direction with respect to sharing data and the willingness of institutions to break down barriers around competitiveness.

I'll move forward to suggest that one major lesson that we have learned is that this is primarily about leadership; leadership at the CEO level of the institution and leadership of the medical staff. I think we've also learned that there are some significant differences between voluntary and compulsory data and that a blending of the two is probably where we need to land. In and of itself, a voluntary system doesn't work. We need to look at how we created the governing structure around a regional initiative, create a better set of arguments for CEOs around the business case of safety, which I think is a failing nationally at this point, and deal with the issue of inter-institutional competition.

Last Friday, I had the opportunity to meet with David Brailer, the new health information technology czar under Tommy Thompson. Dr. Brailer was in Pittsburgh visiting. He went to medical school with one of my colleagues, and I knew him from when I was a resident. He decided to spend the day with us. There were three things that I took from his comments:

  • We need to do a much better job with respect to creating interoperability between systems. For instance, right now, MedMARx® and USP don't talk to each other. There is no need for them to talk, but in a safety structure, our systems are just too hodgepodge and too ad hoc.
  • The multiplicity of reporting requirements from State requirements to in-house requirements is just too disconnected at this point.
  • The reporting goals need to be better coordinated.

With that, I'll leave you with a conclusion. We're not there yet. I think the fundamental problem is we have a series of disconnects. As I look at this audience, there are well-intentioned individuals, myself included, who really want to do the right thing. What we don't have is the right conversation going on between the delivery system in terms of the executives making decisions, the payment structure, and one thing I have not heard articulated very much over the last couple of years, the education system. I think until we fundamentally reform and overlap the safety agenda with the educational system of health care professionals starting with physicians, we will be in this business for a very long time and not necessarily see the kinds of outcomes we hope to see.

Dr. Wilson Pace

The next speaker is Victoria Fraser, a professor of medicine at Washington University and the PI on their demonstration project for the BJC Healthcare System. She will speak on barriers to reporting.

Victoria Fraser, M.D.

As many of you know, the primary purpose of reporting is to facilitate learning from our own experiences. This ensures that all responsible parties are also aware of all of the major hazards. It's also useful to monitor progress in prevention of errors. In many cases, external reporting also allows these lessons to be shared outside of our own institutions and systems. State run mandatory systems also hold hospitals accountable to some degree.

Dr. Leape and others in the room have identified the characteristics of the most successful reporting systems. I won't go over all of these. Clearly, I think we need to continue to focus on non-punitive, confidential, independent systems linked to expert analysis that report and analyze data in a timely fashion. Systems that are oriented toward and very responsive to the needs of patients and health care workers have the best chance of providing useful and meaningful data.

My task really was to discuss the barriers. I think, unfortunately, there are still many barriers that exist, the greatest of which for many people is fear. We're afraid of being blamed for making a mistake. We're afraid of lawsuits. We're afraid of repercussions for talking about errors, reprimands from superiors, and negative performance appraisals. One of my colleagues actually said that she was going over her performance appraisal and found two incident reports in her performance appraisal from a supposedly very patient safety oriented facility, which made it very hard for her to think that it wasn't punitive. We're afraid of being labeled as incompetent by our peers and ourselves. We're afraid of the negative consequences for the patients.

Ralph Waldo Emerson said, "Fear defeats more people than any other one thing in the world." If we're going to succeed, we have to think about the fundamental thing that we as parents do for all of our children, which is try to assure them that we will protect them and not let anything bad happen to them. We need to focus on that area for our patients as well.

Some of the barriers to health care workers' reporting are still very real. We have significant staff shortages, and our staffs are overworked and underpaid, which means that reporting is just another thing that may keep them away from the bedside. In many cases, the reporting system is cumbersome or too much work, may take too long, and may be perceived as a hassle rather than helpful.

There are clearly still barriers in terms of people's understanding of what errors are. I think we continued to see yesterday that many of us disagree about what certain errors are, both at the institutional level and at the individual level. We disagree over the definitions. Sometimes there is lack of recognition that an error has even happened because our error detection systems are too simplistic and people are uncertain about exactly what to report, whether it has touched the patient or didn't touch the patient and did or didn't cause harm.

Another barrier is that we're not as good as we should be at providing timely feedback and interventions to prevent the errors from recurring. In many systems, we don't provide enough feedback about every individual report. The in-box may actually be overwhelming the out-box. We don't often give positive feedback for correct behavior. We don't often provide rewards or incentives for reporting. Often, because of the huge number of errors or near-miss events, we have inadequate dissemination of data from these patient safety and error reporting systems. Other barriers are cultural or individual and relate to health care workers' feelings that it's hopeless, that it won't lead to improvement, that the system is too cumbersome. They don't have enough information on how to report. They're unclear about whose responsibility it is to report or they are disinterested or lack motivation to report.

I think one very big barrier is financial. You've heard from other speakers about how huge and hard these systems may be to maintain. Many places don't have effective reporting systems because they don't have the resources or the finances to build them. Building the system isn't enough. Then you have to support, maintain, and evolve the system so that it is effective in the future. Many places don't have the resources to provide effective analysis or to disseminate results fully. This work doesn't come free and I don't think we'll succeed long-term with volunteer efforts. I think we are still woefully short of staff to implement patient safety intervention. If we spend all of our resources and all of our time doing surveillance and counting errors, then we aren't effectively using our resources to intervene and prevent errors.

These are some of our own results from our cultural surveys looking at over 5,000 health care workers with about a 60 percent response rate in a 12-hospital system in the Midwest. We had the following results for the main question, "Which of the following factors has kept you or might keep you from reporting a medical error?"

  • 47 percent did not know whether the event was important enough to report.
  • 42 percent felt it's more trouble to report than it is to just fix the problem.
  • 40 percent were concerned that the report won't be kept confidential.
  • 39 percent did not want to get themselves or anyone else in trouble.

Even in our system where we've had a huge amount of resources and people invested in this project, we don't have 100 percent of our employees feeling comfortable and safe reporting or discussing errors.

I think there are significant differences between worker groups. If you look at these questions about how people feel in terms of reporting and break it out into bosses, licensed professionals, technicians or entry-level workers, there are significant differences in terms of people's level of comfort and confidence. In response to the statement, "If people find out I made a mistake, I will be disciplined," only 10 percent of management feel like they'll be disciplined but 30 percent of the technical staff feel that they will be disciplined. If we're going to make progress in patient safety, we have to overcome these individual and departmental differences in how people feel in terms of reporting and dealing with errors.

We found significant differences between management, professional staff, and technicians when we talked about which of the following factors has kept them or might keep them from reporting a medical error:

  • Not knowing whether the event was important enough to report.
  • Feeling it is more trouble to report than just fix the problem.
  • Concern that the report wouldn't be kept confidential.
  • Not wanting to get anyone in trouble.

I think there is still a sharp and a blunt end phenomenon here in terms of the relative risk that people feel and their closeness to the patient.

We did a survey of physicians, which Tom Gallagher spearheaded, in St. Louis, Missouri; at the University of Washington in Seattle; and with multiple medical and surgical physicians in Canada. For this, I think we were all pretty reassured that most of the doctors felt that, to improve patient safety, physicians should report serious errors to their hospital or health care organization, which is great. One big difference is between the United States and Canada, where litigation is very different. If you go down the scale and ask how important it should be for physicians to report minor errors, there is a significant decline in the number of physicians in the U.S. and Canada who agreed with that. If you go down to near misses, there is an even further decline in how important physicians thought it was to report them. I think we still have a long way to go to help people understand the benefits of reporting, particularly near misses and things that don't reach the patient, so that we can fix them before they ever get to the patient.

Another finding is a little telling. When we asked, "Which of the following, if any, have you used to report errors in your hospital or health care organization in order to improve patient safety?", telling quality management or completing an incident report were the most common mechanisms for reporting errors. Significantly fewer respondents reported to a patient safety program. We're still in the risk management arm of data collection and data reporting. I think not enough places have moved toward having distinct patient safety programs that have a very different focus than risk management and that may be interested in collecting, analyzing, and acting on very different kinds of data than risk management does.

We developed a simplified reporting system for ICUs. Our online Web-based risk management reporting system is focused on events that reach the patient and cause harm. It's pretty simple and straightforward to use but is not accessible to physicians because they're not hospital employees. They're usually private or university employees. We implemented a card-based system. Without a huge amount of investment or initiative, there was a significant increase in reporting in the medical ICU, the CT ICU, and the SICU compared to the online preexisting system. We also found that over time, there was increasing confidence and comfort in this reporting system because the proportion of people who put their names on the cards and who really wanted to talk and interact about the events they were reporting also increased over time.

In order for us to get past the barriers, it really requires more significant organization and cultural changes. I think the process really is just beginning and we can start to feel like the ship is moving slightly. We need specific systems to be maintained long-term. We can't build all of these structures and then have them die or run out of gas in the next year or two. We have to develop more systems, incentives, and rewards for reporting as well as more rapid systems for analyzing, reporting, and feeding back data to frontline workers, which requires additional resources long-term and infrastructure to maintain and improve these systems as well as additional ongoing research.

This cannot be a passing phase in health care. It cannot fade out in the next year or two, to be replaced by whatever comes next. We still need multiple levels of change. Medical, legal, and risk management issues still need to be addressed in a major way. We need improved stakeholder communication and collaboration, as well as ongoing financial investments by States, hospitals, and the Federal government to help build and maintain patient safety databases and then to analyze and use the data. We need additional studies of interventions to prevent and manage errors, to understand which are the most effective and how they can be implemented most cost-effectively. I think we need additional long-term research commitments with AHRQ leadership, along with funding from other Federal agencies, to maximize this kind of work.


Current as of July 2005
Internet Citation: Making the Health Care System Safer: Town Hall Meeting: Translating Re: Third Annual Patient Safety Research Conference. July 2005. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/news/events/other/ptsconf3/ptsconf3c.html