Making the Health Care System Safer: Town Hall Panel Discussion

Third Annual Patient Safety Research Conference

Proceedings of the third annual patient safety research conference.

Town Hall Panel Discussion

Moderator:
Wilson Pace, M.D.
Director, American Academy of Family Physicians (AAFP) National Research Network

Panelists:
Kimberly Rask, M.D., Ph.D., Director, Emory Center for Health Outcomes and Quality
Carl Sirio, University of Pittsburgh
Victoria Fraser, Washington University

Dr. Pace: This is the time for interaction, questions, and answers. One thing we found in our reporting systems is that you really need both quantitative and qualitative data to be able to make real change. Can you describe briefly how your systems support both of those approaches or if they use the mixed method approach?

Dr. Rask: Just to give you an example: in our medication error analysis, we also use the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) index to identify serious medication errors. We have done the analysis both quantitatively and qualitatively, not only looking at the data from a statistical standpoint but also looking at the root causes: what happened, and what can you do to improve care and reduce those types of errors? You really need both types of analysis for whatever you're looking at, and we have tried to do that in the areas on which we're focusing.

Male speaker: I would reiterate that from the perspective of specific events. One thing that we've added to this is also trying to understand organizations better. One thing that we've tried to do is understand the organizational factors that go into success or failure by bringing in people who have operations, research, and management experience.

Dr. Rask: Our system is definitely built using both quantitative and qualitative data. The quantitative data we showed, looking at error rates and improvement in care according to guidelines, is one example. Qualitatively, we've also done a lot of analysis of the individual improvement plans, using that to gauge how hospitals are actually able to use this information to change their processes. I think that's key. We've really seen a lot of success in feeding that information back, both to individual hospitals and to hospitals in the group, to be able to say, "These are some of the things that your peer organizations are doing that are helping. Give it a try."

Female speaker: Dr. Sirio, I was curious: to achieve that increase in reporting of A and B medication errors, what were the hospitals able to get back, or what was your collaborative able to give back to the hospitals, in terms of information they could use? Given the burden of collecting, they must need to get something back from it, and I was curious to know what that was. My other quick question, for Dr. Fraser, is about the safety card system: could you briefly describe how that's different from your online reporting?

Dr. Sirio: I think the first thing was that most of these reported A and B errors came from pharmacists; this gets to some of the points you were making about how different professionals view the data. In part, it was pharmacy department activity that, in most institutions, was not generalized to the medical and nursing staffs, which I think is a problem.

The second thing is that many of the hospitals most active in reporting A and B errors used the data in much more real-time ways than our quarterly feedback allowed. They instituted processes and mechanisms to use the data themselves. It was a coupling of intense pharmacy activity with internal processes that supported using the data in real time.

Dr. Fraser: The patient safety cards were different from the online event reporting system in a number of ways. The online event reporting system is a risk management system focused on harm and accessible only to hospital employees; private physicians and university students can't use it. It's used predominantly by nurses, and other employees in general don't feel empowered or comfortable using it. The cards were 5 x 7 printed cards placed on the rounding carts, in the nurses' lounge, and in public areas of the ICUs, accompanied by an education program. They had categorical boxes describing types of events that might occur, as well as a text box where people could write in a description. Reporting was confidential and could be anonymous, or the health care worker could self-identify. It was really geared toward patient safety, so they were patient safety reports: we encouraged reporting of near misses, no-harm events, and any patient safety concerns or unsafe situations. It really motivated people to discuss their concerns as well as what they were witnessing. In the very beginning, people reported mostly on other people's events or other people's errors; as they became braver and identified themselves more often, they also talked about their own events and their own involvement.

Female speaker: I want to thank the panel for a very nice discussion. I would like you to discuss the differences in rates across systems. The State systems are getting very different rates than JCAHO [the Joint Commission on Accreditation of Healthcare Organizations], so there is a quality issue in whom we are to believe. There is also the concern, touched on earlier, about having so many different systems collect the same data, especially when we're getting different rates; it's really a barrier to reporting. What should we be thinking about, as a collective group, about these different silos and this competition? What should we do? We're actually burdening some people.

Dr. Sirio: I have some strong feelings about that. I think you're right; we have created a bunch of silos that don't necessarily work in the big picture. Look at Underwriters Laboratories standards for electrical sockets: you can have a Braun toaster or a GE toaster, and you can plug either one into the wall and it will work. I think the time has come when we'll have to figure out a mechanism to start standard setting around systems. We know the barriers; we've heard about them for a long time. We know that some systems are better than others at capturing things because of definitional issues. Standard setting is something that we will have to agree to at some point. It will require people to put down their knives and talk, as opposed to just saying, "My system is better than your system."

Dr. Rask: I think we definitely need standards. I would also put that back on those of us in the research and development community, in that we have not come to agreement about standards. We need a good idea of what we would like to recommend before we start asking the end users to accept it.

Dr. Sirio: I think as a rule of thumb, most of us who work in these systems feel like none of them over-report errors. In general, we go to the higher rates.

Dr. Fraser: I'll just throw out a comment, because I showed information comparing the JCAHO statistics with New York's. It was just an illustration to show that a mandatory system will probably get more reports than a voluntary system, even one capturing the most serious events. We should focus on what you can do to identify ways to improve systems based on the data from the reports, on a quantitative basis as well as a qualitative one. I also agree that there need to be some standards. I'm sure that with the increase in technology and the move toward interoperability, there will be ways for different reporting systems to interface and talk to each other, and there will be some common language in which we can share information.

Male speaker: As an editor who happens to see most of the articles that come out of this process and watches them try to get published, I've been playing with an analogy in my head. It's an interesting phenomenon in scientific research, at least in journals, that we're calling for registries of all clinical trials. One reason is that, by some estimates, for every positive trial there are 17 negative unpublished trials; we're trying to figure out the reality of science and knowledge. I see a similar problem going on here, and I'll propose an alternative.

The alternative is not to report events but perhaps to report, as a registry across hospital systems, the number of scientific endeavors that are actually asking questions, what the measures are for those questions, and the incremental changes. That might be a better way to improve the scientific validity of some of this. You did a beautiful job of convincing me this is wrong-headed; this will just not go anywhere. This is an impossible path, given that maybe some of your data improvement is because someone quit reporting bad data. You just don't know. We're really lacking the scientific methodology that has served medicine well; it doesn't seem to be reflected in your reporting, or in these efforts as I try to get these things published. It's a struggle. The scientific credibility of a lot of this is poor. I'm asking for your comments on a possible alternative: a scientific reporting system rather than an outcome event reporting system. What are your thoughts?

Dr. Fraser: Having done infection control for almost 20 years and benefiting from the NNIS [National Nosocomial Infections Surveillance] system as well as other surveillance systems for identifying and tracking infections, my personal impression is that it's probably not the best thing to get very rigorous about comparing rates and comparing data across systems. The surveillance methodologies vary too dramatically. If you're in a scientific setting, not doing performance improvement or operations with no budget, you can validate the data collection methods across centers, have rigorous definitions, do comparisons of inter-rater reliability, and capture rates by a number of different systems. We're shooting ourselves in the foot repeatedly by saying the Joint Commission data is different from the NYPORTS data and different from my data. The purpose of this data is not direct comparison; it is to be practically useful, to give hospitals some clue about the bad things that are going on that have been hidden, not deliberately but just because they're under the radar screen, so people can start fixing them. From 30 years' worth of infection control data, people have been able to develop and design interventions to lower infection rates.

Part of the challenge is that we'll never be able to do randomized controlled trials of patient safety events. I cannot ethically randomize patients so that some do not get two identifiers while your side of the room gets two identifiers every time you come into the hospital. Part of our limitation is that we're dealing with very complex systems and things that can't ethically be randomized. I think we can use better scientific methods, with grant funding, to measure the effect of interventions; we can do case-control studies and cohort studies so that we can determine causality and measure whether interventions really work or whether it's just regression to the mean.

Female speaker: Let me make a quick comment. No one is asking us to do randomized controlled trials, because the methodology doesn't fit; you have to be able to mask everyone in the environment, or you don't do a randomized trial. We're asking you to do it all the same way, and to do it in the context of a thought process. I like what you're saying. Maybe these reporting systems trigger activities, but long before we had reporting for infectious diseases, we were doing studies trying to reduce infection, so I'm not sure you're right there. We're not asking for randomized trials. We're asking that it all be done one way, not that the State of Iowa does it one way and the State of New York does it another. Ask some of these questions in a more credible fashion and report the efforts for improvement, not necessarily what the outcomes are. No one has asked for randomized controlled trials; they shouldn't be done.

Dr. Fraser: I have been asked to do those.

Dr. Pace: This discussion reminds me of something that Karl White often used to say, which is, "I'd rather be mostly right than precisely wrong." I think that's what we're about is trying to be mostly right.

Female speaker: From the sublime to the mundane, I wanted to ask Kimberly Rask about the administrative burden of the PHA [Partnership for Health and Accountability] program: what kind of staffing does it take to get back to the hospitals and give them improvement plans for some of their root cause analyses? Regarding the New York reporting system, I was wondering if there have been any legal challenges to some of that data. In Chicago, there has been a challenge to peer-review-protected information from medical studies.

Dr. Rask: In the PHA program, four field representatives divide the hospitals among them; I believe each is responsible for up to 40 hospitals. That person's full-time job is interfacing back and forth on many of these issues, and I'm sure they work more often with some hospitals than with others, depending on their needs. The other piece is that there has to be someone in the end-user hospital who is connected to this program; each hospital is required to designate a peer review contact who works with PHA on these programs. It's a little hard to separate out what specifically is related to this program, because it's been designed to interface with all of the other quality improvement activities that hospitals are required to do for JCAHO, CMS, et cetera. They've tried to use one data collection to feed two different sources and serve two different purposes. PHA tries to keep as much of the analytic function internal as possible, so it's not an administrative burden on the end-user hospital. Nevertheless, resources are required, and this is not something that can be done without an ongoing infusion of resources to support it.

Female speaker: With respect to peer review, we do have statutory peer review protection: reports from hospitals into our system cannot be disclosed, not in legal actions and not under the Freedom of Information Act. We have been challenged, and the protection has been upheld. To this point, we believe that those peer protections exist in the system.

Dr. Pace: Let's hope that's the case; in Colorado, they used an interesting end around by going to Federal court, which then threw out the State law. We have time for one last question.

Dr. Keroack: I'm Mark Keroack from the University HealthSystem Consortium. Based on some of the problems I'm hearing with many of these reporting systems, I wonder if we have the wrong mental model of how to organize a reporting system. We have raw data coming in to a limited number of experts, who try to categorize it precisely and then give meaning back to the people who entered the data: buckets of data coming in and teaspoonfuls of information coming back. I say this because I think I came to this with the wrong mental model as well. We've been running a reporting system in 20 academic medical centers for the last 3 years and had the same problems: people feeling beleaguered and overwhelmed by the amount of information coming in, and the great majority of reports going unanalyzed. Our system was designed to allow involvement of the natural leadership structure, that is, the unit nurse managers and the unit pharmacists. Much to our surprise, there was a lot more activity going on between the unit leaders and the reporters than between the reporters and the central, supposed experts in patient safety. That dialogue has actually been a rich source of potential improvements in patient safety; we've identified hundreds of examples where people have altered policies or workflows because of that conversation. I wonder if, in our quest for precision, we've sacrificed utility and avoided dealing with the natural leadership structure that exists in our organizations.

Dr. Sirio: I don't know if I'm going to answer the comment directly, but I'll use it as a jumping-off point for my pulpit position. I think you're touching on something that's absolutely correct: we've created an elegant set of examples around the country of how to report, but not necessarily effective mechanisms for how to use the data. Your comment suggests that the relationships within an institution that facilitate safety are almost accidental: the nursing manager talking to the pharmacy, the pharmacist talking to a doctor.

I think we need to step back even further than that and figure out a way to embed this in the natural flow of work. Right now, safety is over here; education is over here; and clinical delivery is over here. And the payment structure is completely screwed up with respect to linking those three. Until we recognize that we're trying to graft onto a system that doesn't actually value the kind of thing we're discussing, we're not going anywhere very fast. I think it's time to look at the way we accredit residencies and medical schools, and the way we educate and reward people in the other professions, so that we create a group of people who wake up in the morning and don't have to think about safety because it's simply part of the work they do. I don't want to make comparisons to other industries; I just think that, in our industry, we have not understood that delivering a good piece of care, be it an operation or care in the ICU, is not disconnected from safety. They are the same, yet we think of them as two separate issues.

Dr. Pace: Thank you very much.



Current as of July 2005
Internet Citation: Making the Health Care System Safer: Town Hall Panel Discussion: Third Annual Patient Safety Research Conference. July 2005. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/news/events/other/ptsconf3/ptsconf3d.html