Transcript: Webinar on AHRQ Quality Indicators Toolkit for Hospitals

How To Improve Performance on the AHRQ Inpatient Quality and Patient Safety Indicators

The Agency for Healthcare Research and Quality (AHRQ) developed a toolkit to help hospitals understand the Quality Indicators (QIs). To orient users to the toolkit, AHRQ held a Web seminar on February 15, 2012. This is the transcript of the webinar.

Webinar on AHRQ Quality Indicators Toolkit for Hospitals (audio only) [MP3 audio file, 1 h 27 m 11 s]


Lise Rybowski: Good afternoon, and welcome to a webinar sponsored by the Agency for Healthcare Research and Quality about the new AHRQ Quality Indicators Toolkit for hospitals. We're so pleased that so many people across the country were able to join us today to learn about this comprehensive resource for hospitals looking to improve the quality and safety of inpatient care. I'm joined today by four speakers. First, Joanna Jiang from the Agency, who is the project officer for this toolkit. Also, we have Donna Farley from RAND, who oversaw the development of the toolkit in close collaboration with UHC [University HealthSystem Consortium]; Peter Hussey from RAND, who led the evaluation of the toolkit; and finally, Ellen Robinson, who implemented the toolkit at Harborview Medical Center in Washington.

My name is Lise Rybowski, and I'll be the moderator for today's event. Joanna is going to kick off the presentations today by commenting on the purpose of the toolkit and AHRQ's motivation for developing this new resource. Donna is then going to provide a quick overview of the components of the toolkit and explain how RAND and UHC collaborated to develop this resource along with input from several key stakeholders from around the country. Then Peter is going to discuss the findings from a field test with six hospitals, which was done to ensure that the tools were usable and useful in the real world. Finally, Donna and Ellen will discuss three sections of the toolkit to illustrate how the toolkit is designed and how one hospital applied the tools to identify and address quality problems.

Before we proceed with the presentations, I wanted to make sure that everyone is clear on where to find the toolkit. It's on the AHRQ site at http://www.ahrq.gov/qual/qitoolkit. If you know anyone who wanted to attend this webinar but couldn't, please let them know that we will be repeating this event in the near future. Also, a video recording will be posted on the toolkit page. And if you want to learn more about components of the toolkit or specific tools, watch for announcements of audio interviews we'll be adding to the toolkit page in the spring and summer. And now I'll turn this over to Joanna Jiang. Joanna, thanks for joining us today.

Joanna Jiang: Thank you, Lise. Hello, everybody. I'm Joanna Jiang, and on behalf of AHRQ, I would like to welcome and thank you for participating in this webinar. It has long been one of AHRQ's missions to improve quality of care by supporting research and the development of measures and tools. Many of you have heard about the AHRQ Quality Indicators, which have been around for more than a decade and have been increasingly adopted into public reporting and value-based purchasing. There are public reports on hospital performance in 26 States that use the AHRQ Quality Indicators. In addition, CMS, the Centers for Medicare & Medicaid Services, began reporting 10 AHRQ Quality Indicators on Hospital Compare in October 2011: 7 Patient Safety Indicators and 3 Inpatient Quality Indicators, as listed on this slide.

While public reporting and value-based purchasing can provide a motivation for change and improvement, they do not inherently help hospitals to improve. Many hospitals may lack the knowledge and skills to implement quality improvement. There's a need for a set of standardized yet adaptable tools that can help hospitals incorporate these quality indicators into their ongoing internal assessment and improvement efforts. So to meet this critical need, we developed the AHRQ Quality Indicators toolkit, which you're going to hear more about. The work was done through a 2-year contract awarded to the RAND Corporation, which collaborated with the University HealthSystem Consortium (UHC), and I'm really pleased that today we have the project staff from RAND to introduce this toolkit to you and a representative from Harborview Medical Center to share with you their experience of using this toolkit. I hope you will enjoy the presentations. With that, let me turn it back to Lise.

Lise Rybowski: Thank you, Joanna, for providing that context for the toolkit. I'm now going to turn the presentation over to Donna, who directed the project at RAND. Donna?

Donna Farley: Thank you, Lise. I'm pleased to be here with everyone, and we're very glad to see the kind of participation that we're getting today and hope that we will be able to interact with you as much as possible given the limits of time and the technology to at least respond to a few of your questions as we go through this presentation.

First of all, a brief introduction to the toolkit. I'm hoping that most of you have had an opportunity already to get to the Web site page where the toolkit is located on the AHRQ Web site. But it's basically a set of tools that are designed so that hospitals can use them to help improve their performance in quality and safety and with a particular focus on using the AHRQ Quality Indicators as measures of their performance before and afterward, with the goal of hopefully improving performance on those indicators. The indicators that we are including in this work are the Inpatient Quality Indicators, or the IQIs, and the Patient Safety Indicators, or the PSIs. The tools are targeted to a wide range of hospitals, ranging from small, independent hospitals to larger ones that may be affiliated with systems, recognizing that hospitals will vary widely in their needs for information and support as they go through quality improvement activities.

As Joanna mentioned, the toolkit was developed through a task order through the AHRQ ACTION [Accelerating Change and Transformation in Organizations and Networks] Program, in which RAND partnered with UHC. AHRQ over time will then continue to support the toolkit and there may be a need for additional tools or modification of the tools that are already there.

How can hospitals use the toolkit? As I said, it's applicable for hospitals with differing knowledge skills and needs. We see it as a resource inventory from which you, as hospital people, can select the tools that you feel are most useful for you. Many hospitals already have well-established quality improvement systems, and you may want to expand those with some of the additional tools available here. Others may want to make heavier use of them. I particularly want to highlight that there are different audiences for each tool. Although many of them are targeted to the quality officer, there are also roles for the financial officer, programmers, and others in the hospital.

The Quality Indicators, as I indicated, are the Inpatient Quality Indicators, which consist of 28 indicators in four sets, as you see here, and the Patient Safety Indicators, which screen for adverse events. All of these except for one of the sets under the IQIs are expressed as rates per some denominator of population.

Our development process—and I will go into a little bit more detail on some of the indicators as we go down through the rest of the presentation—consisted of four basic steps. First was developing the toolkit; second, field testing the alpha toolkit with some hospitals in the field; third, as part of that field test, performing an evaluation to learn from their improvement experiences and to get their feedback on the usability of the toolkit and the effects on the QI values for their hospitals. The results of that field test and evaluation then fed into a final step of revising and finalizing the toolkit for dissemination. As you know, it was indeed published on the AHRQ Web site last month. I'm going to talk very briefly about the first two steps of the development and going into the field test, and then we'll turn it over to Peter Hussey to talk through what we learned from that.

We established a set of principles to guide the toolkit's development process. We started with a literature review so that we could gather information on what was already known and which issues looked important for us to address as we identified tools. Then we developed an outline of the toolkit based on the steps of a standard quality improvement process. The final step was identifying and developing the specific tools for each step. We worked with a technical advisory panel of six members with a variety of skills and perspectives, including a number of people from hospitals and people with quality improvement expertise and relevant research skills, and these folks provided some very helpful and insightful guidance throughout the toolkit development on both design principles and the content of the tools.

The principles that guided our tool development were parsimony and targeting the most important factors, providing tools that offer the most value to a range of hospitals, establishing the content of the tools so that they are readily accessible, and also enabling hospitals to assess the effectiveness of their actions. And this really drove us to the structure that you see here, which consists of A through G, seven different sections of the toolkit, starting with readiness to change and then going to working with the QIs and analyzing your hospital's rates and experiences with these indicators, and then moving into the quality improvement process, starting with identifying priorities for quality improvement based on what you learned in analyzing your QI rates, and then into implementation methods, and finally, monitoring progress and sustainability. So inside of each of these sections, there are one to several tools, which are organized according to those sections.

I want to highlight in particular the road map that was developed to serve as a navigational guide through the toolkit. If you look at the presentation of the tools on the AHRQ Web site, the first page, the introduction page, is part of that road map. It gives an overview of the approach that we used and how to think about these tools, how to work with them as you consider the possibility of drawing upon some of these capabilities. Then the second page, when you click on the link that says "downloading specific tools," takes you to a chart, which is the road map. In that road map, we summarize for each tool the action step that you would be taking in a quality improvement process and, with that, a brief description of the tool or tools that could be used for that step; third, the key audiences; and then finally, the position with the lead role responsibility. On the Web site there are links to each of the tools right in that road map, and we encourage you to use it as a way to search for the tools that you need in your particular work.

Our field test design was a quality improvement collaborative that UHC conducted with 11 hospitals participating, and at the same time RAND performed an evaluation to learn from their experiences as they went through that field test and implemented their quality improvement activities. So I'm going to turn it over to Peter Hussey at this point to talk about some of what we learned from that evaluation. Peter?

Peter Hussey: Thank you, Donna. As part of the field test, we conducted an evaluation with six of the hospitals. These six represented different types of hospitals serving different populations in different parts of the country. And we worked with them to learn what strategies they used for implementing quality improvement projects, what their experiences were with those quality improvement efforts, and how useful and usable our toolkit was, the alpha version of the toolkit, in implementing those projects. We tracked these hospitals' experiences over the course of the entire project. We conducted a series of interviews with them, and with three of the six hospitals, we conducted a full-day site visit at the end of the project.

Overall, we heard positive feedback about the utility of the toolkit for quality improvement by the hospitals. As expected, we found that the hospitals did vary widely in how many and which of the tools they chose to apply. We found that hospitals, in some cases, had tools that were similar to the ones that we had developed. They still thought it was useful to compare and contrast and mix different parts of the tools. And we also found, as expected, that hospitals had very different priorities and experiences and so used the toolkit in very different ways. One thing that we heard was that the toolkit was most useful in one area, which was achieving staff consensus on areas such as the extent of quality gaps within the hospital and evidence-based practices that could be applied or implemented to address those quality gaps. What we heard was that in some cases it was useful to have a toolkit provided by a trusted third party to bring into those conversations, and that could be useful in helping to build consensus.


As Donna mentioned, the toolkit was designed to be used with either the IQIs or the PSIs. What we found was that all of the six hospitals chose to address PSIs, and the indicators that they focused on are listed here. What we heard was that their choice to focus on PSIs was driven largely by the focus on these indicators through other programs such as public reporting programs. So there was already a priority or there was another reason to focus on these PSIs. In some cases we found that hospitals, in fact, were using indicators that were not part of the AHRQ QIs, and the toolkit was usable for them as well. The reason for them doing that was that those indicators were in use, again, in other programs.

There were three themes that emerged from the interviews that I want to highlight here that fed into our refinement of the toolkit. The first was that hospitals need to trust their data. The second is the challenge of priority setting. And the third was the importance of simplicity. I'll discuss each of those three in a little bit more detail.

The first thing that we heard from every hospital was that a very important step in this was trusting your rates on the indicators before you proceed with the quality improvement project. The AHRQ QIs are based on administrative data, so coding and documentation processes have a very big impact on the indicators. In some cases, what shows up as an event in the PSI or the IQI on further examination turns out not to be a true quality event but rather a data or documentation and coding issue. The quote we heard from one hospital representative was, "If we're running reports over coding information, we need to be mindful of coding issues before engaging medical staff. We need to be sure that we're not wasting their time." What this reflects is that they are not only concerned about using the quality improvement staff resources wisely, there's also the potential here that the entire project could be undermined or derailed if you go to medical staff and present to them rates that turn out to be not true events. In some cases, there might only be one opportunity to engage, and once you've lost credibility that can derail the entire project.

The second issue is that priority setting is very challenging. We heard that hospitals believe it's a great benefit to be able to look at data and explore priority setting but that hospitals may not have time to do this because they face external constraints. We heard this consistently. Again, there are many different demands on hospitals' time, and that drives decisions about priority setting.

And the third thing was the importance of simplicity. We heard loud and clear that users need to be able to find the tool that they need, and they need to be able to sit down with their colleagues and explain clearly what the tool shows. Based on those three findings, we went back and we revised the toolkit. Three changes addressed each of those three themes. The first was that we added a documentation and coding tool that helps hospitals improve the validity of their PSI rates.

The second thing was that we revised our prioritization matrix to make it flexible, to reflect the fact that hospitals may have different weights for their criteria and may factor many different criteria into priority setting. So it's a flexible framework where hospitals can pick and choose how to weight their criteria and which criteria to include. And the third thing was we went back and simplified the tools and instructions to increase the usability of the toolkit and to help users find what they need.

So we all, I think, owe a big thanks to these six evaluation hospitals. We learned from their experiences and refined the toolkit. All these changes are reflected in the version that's now available on the AHRQ Web site. That concludes this part of the presentation. I'll hand it now to Lise.

Lise Rybowski: Thank you so much, Peter and Donna. I want to remind everybody that you can submit questions at any time. We only have one question right now that I think is actually for Joanna, and Joanna, the person asking the question wanted to know whether AHRQ is going to be doing anything with outpatient quality indicators.

Joanna Jiang: In terms of outpatient quality indicators, all those indicators were developed based on the Healthcare Cost and Utilization Project data, which are primarily inpatient data. But we do have data on emergency departments and ambulatory surgery. So in the long run, we may be developing indicators for those two areas and I think, for the ED, work is currently underway.

Lise Rybowski: Thank you so much, Joanna. I think now we will move on to Donna and Ellen Robinson, who are going to talk about the use of three specific tools. Wait, you know what, I do have one more question. What plans does AHRQ have for adapting the Quality Indicators from ICD-9 to ICD-10? Joanna, is that something you want to answer?

Joanna Jiang: Yes, we have plans in place for doing that.

Lise Rybowski: Great. I'm going to go ahead and move on to Donna and Ellen Robinson. They are going to talk about the use of three specific sets of tools. I want to remind everybody to keep your questions coming, and we will continue to field them as we go along. Donna, I believe you wanted to start with a question for our audience?

Donna Farley: Yes, I certainly do. To give us some idea about where you are in your quality improvement activities and help to set the stage for what Ellen and I are going to be talking about, we have a question here on the screen, which is, "Which aspect of the quality improvement process has been most challenging for your hospital to do well?" And we have items under there: generating rates and trends for your performance measures and establishing their credibility with the clinical staff, reaching agreement on which issues should be priorities for improvement, developing improvement plans that are effective and have buy-in, and finally, carrying out the planned improvement successfully.

If you could just choose one of those—and people are starting to actually vote on this now—we're going to give you a little bit of time for these responses to come in. We know that every step in this process has some challenges involved with it. Anyone who has been involved in quality improvement has encountered this. And each one of them is quite unique. We have attempted to establish tools that, in fact, help with some of those issues, and the results here will actually give Ellen and me an idea of where your emphasis is in terms of where you may be looking for some support to strengthen your own activities.

It looks like the voting is starting to slow down a little bit, so I'm going to broadcast these results so that you can take a look at them yourselves. It looks like we have most of the votes saying that developing improvement plans and carrying out the actual planned improvements are the most challenging for you. These are both areas that can be very complex, and it's interesting to see this, though we do see a fairly even spread across these four categories.

Now I will go on to the presentation and talk a little bit through the specific sets of tools now that we've given you a little bit of background. It's somewhat frustrating to have an hour and a half to talk about a toolkit that has more than 30 tools in it, so what we're doing is trying very hard to focus on those that we feel are most important to highlight. So we are going to highlight how they apply at different steps of the process and hopefully really offer an opportunity for audience questions and discussion. We will be stopping at each of three points during the presentation so that you'll have a chance to ask questions of Ellen or myself.

The way we formatted this is that the first section will be tools that work with data for the PSIs and the IQIs. The second one will be tools related to diagnosing issues and developing strategies for improvement. And the third, tools that can be used for implementing your improvement plans. Those last two particularly tie quite closely to some of the priorities that you all have just identified in the polling question.

Here's a reminder of the toolkit structure, which I presented just a few minutes ago. The sections that we're going to look at first relate to working with the PSIs and IQIs. The tools to do that are in the first two sections of the toolkit, readiness to change and applying the QIs to the hospital data. The specific tools—and we encourage you to map these back to what you have found in our road map on the AHRQ Web site—are, first of all, in the A section: fact sheets on the PSIs and IQIs that give some basic descriptive background on each of these sets of indicators and list the indicators, and then a PowerPoint template for presenting information on the Quality Indicators to your board and staff.

The rest of the tools are all in the B section. These are the ones that do the heavy lifting for calculating your rates on the PSIs and the IQIs as quality indicators. The first tool gives you instructions on that and some highlights of what kinds of rates are available from the software that AHRQ offers, and in the second one we talk about some examples of the output from the AHRQ software. The software is available both in SAS and in a Windows-based program that hospitals can use to calculate their rates, and some hospitals are actually able to get this information from organizations that they are part of. The next set is, again, for presentations: spreadsheets and PowerPoints to present your rates within the hospital.

The next two tools, B4 and B5, are new ones that came about directly as a result of the field test. As Peter indicated, there was a need for more guidance on documentation and coding, which we've developed, and on assessing hospital rates using trends and benchmarks, which we've highlighted in a separate tool. So at this point, Ellen, I'm going to turn it over to you to talk a little bit about your experience in working with the analysis of your indicators at your facility.

Ellen Robinson: Thank you, Donna, and good afternoon, everyone. As they have mentioned, I work out in Seattle, Washington, for Harborview Medical Center, and we're part of the University of Washington medicine group. I had been with quality improvement for about 3 months when I was given this project assignment. I am a physical therapist by background and had worked many years here in our acute care and ICU, but I really did not know much about quality improvement. So my boss said, "Can you figure out what those AHRQ Quality Indicators are and figure out how we can use those?" I'm like, "Okay, great." I really wish, in 2009 when I started this, I had had a toolkit because it really would have made this whole process a lot easier.

But I'm going to go over now how I have enhanced our project using these different tools and what I've found most useful about them. Our project goals for the Quality Indicators were really twofold. We wanted to be able to use them internally to help identify cases where we might have a possible preventable harm. We also wanted to utilize the software to help standardize the way cases were referred out across all our various teams in the hospital. We have a pretty robust M and M [morbidity and mortality] process between our various services, who then report back quarterly among their peers, but everybody's reporting issues were a little different, and the case referral numbers were a little different. So we wanted to standardize that process. We also wanted to understand the data that was being publicly reported. We are part of the UHC, and they use a lot of these Quality Indicators to measure your quality and performance at your hospital. So we wanted to understand what those rates really meant and validate those.

We started with our readiness for change, and really our medical director was the one who gave the edict that we were going to run this project. He was previously the director of our quality improvement department, so we have very strong leadership support and directive for the project. In other words, the board was on board that this was a priority and we should take whatever resources we needed in QI to ensure that this happened. The challenge we identified was that the information about quality and patient safety was not always being disseminated very clearly to staff at all levels of the organization. So we knew that as part of this project, we really wanted to raise the visibility of quality and patient safety to staff at the nursing level, bedside, physician level, all the way up through administration.

As far as applying your own data to the software, I think that was one of the biggest challenges. The input data has to be in a specific format, and every hospital's billing system is going to be a little different. So we had analyst assistance to translate the output of the billing data so that we could run it through the AHRQ software. Once we got our cases out of the software, we spent a lot of time validating these rates. Again, we had UHC as our external source to ensure that our software was running correctly and that we were really able to catch all the different cases.

I think some of the challenges, again—Donna mentioned you can run it in either SAS or Windows, but there are also a lot of version changes. So you may get the whole process up and running, and then you get a format change, and now suddenly what you've been importing through the software may need a little bit of tweaking. So I think that was challenging. Also, all of the cases that were flagged as possible Patient Safety Indicator events were cases that I then reviewed to make sure I understood what the metric was actually trying to measure and, again, whether or not that was a true medical event that we should have concerns about. So those tools that tell you exactly which ICD-9 codes make it in or out of the metric were really, really helpful.
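[Editor's note: As an illustration of the kind of inclusion/exclusion logic Ellen describes, the sketch below screens a single discharge record against code sets for one indicator. The field names and ICD-9 codes are invented placeholders, not the actual AHRQ PSI specifications.]

```python
# Illustrative only: hypothetical code sets and field names, not the real PSI definitions.
NUMERATOR_CODES = {"9973", "99811"}   # secondary diagnoses that would flag an event
EXCLUSION_CODES = {"V1251"}           # diagnoses that would exclude the discharge

def flags_indicator(discharge):
    """Return True if this discharge would count toward the hypothetical indicator."""
    secondary_dx = set(discharge["secondary_diagnoses"])
    if secondary_dx & EXCLUSION_CODES:
        return False  # excluded up front
    # Events coded as present on admission should not count as hospital acquired.
    hospital_acquired = {dx for dx in secondary_dx & NUMERATOR_CODES
                         if not discharge["poa_flags"].get(dx, False)}
    return bool(hospital_acquired)

example = {"secondary_diagnoses": ["9973", "4019"],
           "poa_flags": {"9973": False, "4019": True}}
print(flags_indicator(example))  # True: flagged code present and not on admission
```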

Once I went through that process of validation and I felt, "I feel really comfortable that I'm able to recreate these rates on a monthly or quarterly basis," I took it on the road. I went and talked with the surgical council and our med exec board, various physician groups across the hospital, the board of directors for the hospital. I also went and worked with our coders and for those of you who have clinical documentation specialists, talking to them about what are these PSIs, why do we care about them, what's your current performance, and how do we stack up against the rest of the UHC? That's who we chose as our peer group. Then we went back to, how is QI going to review cases, and what are our expectations from the teams helping us with these reviews. Then the quality improvement overall project was to really identify possible opportunities for improvement and then be able to make some action plans around it.

As they mentioned in the road map, the tool B4 is probably one of the most useful tools if you're just starting out on a project like this. Again, it talks about each PSI and some common challenges that you might run into where you get more false positives than you might expect. And I think the one thing to recognize with this project is that, yes, there are limitations to the administrative data—and again, that was a lot of the push back that I think you get from clinical staff—but there's also some real potential. And I think we have to recognize that it may not find everything for us or find every event, but it does have an opportunity to maybe make some change for patients. And again, partnering with your clinical documentation programs or your coding departments is really critical to the success of your project because if you don't have someone on their side helping you understand the metrics and understand why they are putting out codes in the way that they are, it's a little bit harder to make change on that side. I think we have time for questions.

Lise Rybowski: Yes, we do. Thanks so much, Ellen. I have two questions here that I think are probably for Donna, although Ellen, if you want to chime in as well, you are welcome to. The first question, Donna, is whether the tools are independent of one another or if there's a priority for learning one over the other.

Donna Farley: That's a very good question. They largely are independent of each other. Among the tools in the B section, which deal with calculating your rates for your Quality Indicators and working with presentation, trending, and benchmarking of those rates, there is some order. I would suggest that the first one that you would really want to look at is the very first one, B1, which is the anchor. The very first thing in that tool is a presentation of the different types of rates that can be calculated with the AHRQ software, and I think it's very useful to understand those rates as background for everything that you do so that you have confidence in what you are seeing in your numbers. The followup tools are supports to that and help you through the process of calculating your rates or working with the rates once you either calculate them or receive them from somebody else.
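[Editor's note: To make the distinction among rate types concrete, here is a rough sketch of how observed, expected, and risk-adjusted rates typically relate under indirect standardization. The numbers are invented, and the AHRQ QI software performs the actual computation; this is only a reading aid, not the software's algorithm.]

```python
# Invented numbers for illustration; the AHRQ QI software does the real risk adjustment.
numerator = 12            # flagged events at the hospital
denominator = 4800        # discharges at risk
observed_rate = numerator / denominator

expected_rate = 0.0031    # rate predicted for this hospital's case mix by a reference population
reference_rate = 0.0028   # overall rate in the reference population

# Indirect standardization scales the reference rate by the hospital's observed/expected ratio.
risk_adjusted_rate = (observed_rate / expected_rate) * reference_rate

print(f"Observed:      {observed_rate:.4f}")   # 0.0025
print(f"Expected:      {expected_rate:.4f}")
print(f"Risk adjusted: {risk_adjusted_rate:.4f}")
```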

The documentation and coding tool does stand on its own. It really was developed to deal with a very clearly defined and distinct issue. The trending and benchmarking tool actually works hand in hand with the first tool, but you can get into that issue very readily and deal with it alone. I think that we will see that kind of tension again in the remainder of the tools when we get into the ones that are used for implementing improvements, that there is a chronology of steps that you should be taking in an improvement process, and those tools help you through that.

Ellen Robinson: And if I can just chime in too, Lise, I also think it may be based on where you are in the project. So you may know your rates and feel comfortable with them, but maybe you haven't actually done anything on the ground operationally to try to impact them. So I think you might pick and choose, depending on where you are in the evolution of your project.

Donna Farley: That's a good point, Ellen. And we recognized that as we developed the tools. We tried to make each one stand on its own so that they can be as responsive as possible to where any given hospital may be in its process.

Lise Rybowski: Thank you so much. We have several more questions that have been coming in. I want to remind you all to feel free to ask more questions. Whatever we don't get to now we can get to later. We have a question here whether—I think this is for you, Donna—whether you could expand on the use of the priority matrix and the criteria contained within it.

Donna Farley: We will be getting to that one in the very next discussion, so let's hold that one for the next round because there's going to be a pretty strong focus on that. It's an important tool in the prioritization process.

Lise Rybowski: Okay. I'm going to move on then to the next question. We have one that asks whether you'll use AHRQ certified patient safety organizations to collect data analysis and system recommendations. Maybe this is for Ellen.

Ellen Robinson: I'm not sure if I understand the question. Can you repeat it one more time, Lise? I'm sorry.

Lise Rybowski: Sure. The question is, will you use AHRQ certified patient safety organizations to collect data analysis and system recommendations?

Ellen Robinson: At this point we've used the software to address our own patients and we benchmark against UHC, so I'm not sure if they mean if AHRQ is going to be doing some further collection of who's using them or not.

Lise Rybowski: Joanna, is this something you would want to comment on? Joanna, you might be on mute.

Joanna Jiang: Sorry. I was on mute. Would you mind repeating the question?

Lise Rybowski: Sure. The question was whether you will use AHRQ certified patient safety organizations to collect data analysis and system recommendations.

Joanna Jiang: I think I will defer that question to my colleague in another center who is handling the PSO.

Lise Rybowski: Okay. So maybe this is a question that this person should send into AHRQ? Okay. My next question is, do you expect that the toolkit and AHRQ software will have the same results? Donna, is that something you can comment on?

Donna Farley: Would you repeat that one again, Lise?

Lise Rybowski: Sure. Do you expect that this toolkit and AHRQ software will have the same result?

Donna Farley: Okay, I just wanted to be sure. Yes, indeed, because the information that we provide in the tools for calculating your rates, in fact, points you right to the AHRQ software so that we are assuming that the AHRQ software will be used to calculate the rates for the IQIs and the PSIs. What we provide in here is additional information and instruction on how best to use the software and how to manage some of the issues that come up when you are using it. There's also some description of alternatives that hospitals might have if they are indeed receiving their rates from some other organization, such as UHC or other hospital membership organizations. But we do not provide any different materials than what AHRQ has already provided. This is a resource to help reinforce hospitals' use of it and to support that process.

Lise Rybowski: So on that note, I also have a question here asking whether the AHRQ software is available to those of us who do not have it. Joanna, can you comment on that?

Joanna Jiang: Yes, it's available to everybody. You can download it for free from the AHRQ Quality Indicators Web site.

Ellen Robinson: And yes, just to clarify, this is Ellen again. We were able to download it very easily, and again, I think that speaks to a little earlier where I mentioned the hardest part was being able to get your own data in the right format to run it through. I use it on a monthly basis when I run my cases, and I think all of these tools in the toolkit just support you on how to use that database and that software. So they are very related.

Donna Farley: There's another aspect to this that we had not mentioned explicitly, but I know that we have found it at RAND on any number of occasions: when you start using the data and the calculations of your rates over time, to look at trends in your rates, there are changes made in the AHRQ software from year to year that sometimes make it difficult to get comparable rates for all years. That requires an adjustment, so that you choose one year's version of the software and work with that computation to be sure that you have consistency over time. But all of it, again, I want to reinforce, can and should be done with the AHRQ software, because it's been carefully developed in order to give you rates that are as accurate as possible.

The caveat on that gets back to the documentation and coding process, because one of the things that we found as we were doing our preparation for development of that tool is that often what looked to be invalid rates for the PSIs turned out to be either a documentation or a coding issue. One of the things that you really have to look at carefully is to be sure that you have made the conversion from your clinical data into your administrative data in a way that's been coded effectively so that you are, in fact, looking at accurate rates and therefore rates that are credible to your clinical staff.

Lise Rybowski: Thank you, Donna. Ellen, we have a couple more questions. I'm not sure if you're going to be getting to this in your next slide, so we can hold on this one if you think so. But we have a question asking whether you give any specific attribution toward physicians, departments, et cetera, as you record.

Ellen Robinson: Yes, we do, and I've got a slide. I don't think I have the specific slide here, but yes, we have kind of come to an agreement over time about which departments we are going to refer cases to through this M and M process, and then the case would be attributed to that physician group.

Lise Rybowski: Thank you. Donna or Peter, could you explain a bit more about how you use UHC to help you validate data findings?

Donna Farley: To validate data findings? We didn't do any validation of the data findings as part of our evaluation. The hospitals that were participating with us in the field test and working with UHC through their collaborative did a lot of their own data validation and, in fact, I think UHC provided some feedback to them in that process. But Peter, I think you might be able to speak to this a little bit because you did observe in the evaluation that several of the hospitals spent a good part of the year that they were involved in the field test cleaning their data, basically.

Peter Hussey: Right. As part of this process, we didn't do anything that I think would be too unusual for quality improvement in terms of the hospitals checking their rates. They did have UHC available to them as part of the field test, and UHC performed a number of services. One thing is, most of the hospitals that participated in the field test were members of UHC and so were able to compare any rates that they calculated themselves against the rates that were provided to them by UHC. And then the other piece is that UHC also provided support to the project in moving through the steps with the toolkit, so that type of support may not be available to non-UHC members who are using the toolkit going forward.

Ellen Robinson: If I can chime in on my use, we get the UHC summaries, but the data can be 6 weeks old, 3 months old, and for us we really wanted it to be more rapid. It's difficult to refer a case out that happened last November. No one remembers the case. None of the residents are here. So we were really trying to move to as close to a real-time model as possible. And so as we ran these things month to month and as we got the more historical UHC data, we checked back against it, saying, "Okay, we're on task and everything is running really well in the software."

Lise Rybowski: Thanks so much. I have a question here I think for Peter. How successful were the pilot health care organizations at improving the Quality Indicators?

Peter Hussey: It's a good question. Over the time period that we tracked, most of the hospitals were not able to get to the stage of implementation where they could evaluate their improvement. I would say with some of the hospitals, we stopped tracking maybe right before they got to that stage, unfortunately. The anecdotal comments that we heard were that they were seeing some trends in the right direction. In some cases, fairly dramatic results were reported back to me in terms of reducing event rates. But I think a lot of the delay was related to some of the issues that have come up. So hospitals, yes, saw improvement in validating their data and getting to the point where they were confident they could move ahead with improving real quality, but over the time period that we tracked, which was through the earlier part of 2011, maybe through mid to late 2011, the hospitals had not yet gotten to the point where they were acting on preliminary results in their rates.

Lise Rybowski: Thanks, Peter. I have a question here asking whether hospitals can get benchmarks using the toolkit. Donna?

Donna Farley: That's actually a very good question, and unfortunately we have come to the conclusion that we were not able to provide the benchmarks, in large part because many of the organizations that provide benchmark data for comparisons provide it in the form of proprietary data, and everything that we do with this toolkit is in the public sector. So we have to limit ourselves to what is available in the public sector. What we do in the trending and benchmarking tool is offer a number of suggestions to hospitals on sources that they might reach out to in order to obtain benchmark information that they can use to compare their performance with others.

We tried. We really wanted to provide at least some benchmarking. But we came to the conclusion that it would not be possible. There is some benchmarking that's available through the output from the AHRQ data, but part of the problem that you have in many of these calculations and the data that you're using for comparisons, national comparisons in particular, is the timeliness of the data. Hospitals want to be able to get data that's as current as possible, so we really encourage people to look to sources that are available to them through their State associations or other organizations with which they are affiliated.

Lise Rybowski: Thank you so much, Donna. We do have some more questions, but we are going to move on. So I think if you and Ellen could go on to your next segment, we'll take more questions after that.

Donna Farley: Surely. Thanks, Lise. The second set of tools and items that we are addressing is those related to diagnosing issues and developing strategies. And we're back to the structure of the toolkit again, and I've highlighted on this slide that there are tools in the readiness to change section as well as the sections on identifying priorities, the implementation methods, and the return on investment analysis, all of which can be used in the process of diagnosing what issues you have in your hospital based on what you've seen and the rates you've calculated and then developing strategies to deal with them.

And [this slide shows] the specific tools. The first tool is actually something that we encourage organizations to do even before they start moving down the road of a quality improvement process, and that's assessing how ready you are as an organization to change, to make those changes. There's a set of questions in here that deal with readiness for quality improvement in general and another set that deals with readiness to work with the QIs. That's part of the assessment process, but we suggest it as a very early step to be taken. Then the prioritization matrix is a big part of setting your priorities, and we provide in the toolkit both the matrix and an example of a completed matrix to help people understand how it can be used.

There are a number of tools in the section on implementation steps, which run from D1 to D5: an overview of improvement methods, developing a project charter, some examples of effective improvements and best practices for the PSIs, and a gap analysis that can be done to assess what your situation is and what you need to do to close the gap between your actual practice and what you would like to see happen. Return-on-investment analysis is fairly sophisticated but in fact can be used both in the planning phase and afterward to see how you actually did, so we encourage its use in both places, which is why it's listed here.

I'm going to focus on the prioritization matrix and return-on-investment analysis as what we think are a couple of the key tools and have some issues involved that we feel are important for you to be aware of. The prioritization matrix, we found from feedback from the hospitals in the field test, turned out to be an important decision support tool. We got a lot of feedback on the tool. All of them used it as part of the implementation process, and they all had a variety of views on how it should be used and what factors should be included in it. What we have in the tool now as factors that can influence choice include benchmarks, cost, strategic alignment, regulation, and barriers to implementation. Yet we encourage hospitals to tailor this, to adapt this matrix to the factors that are important to you. There may be some in this tool that are not particularly important in your priority setting, and there may be some that are not in there that you feel need to be included. Those changes can be made. We encourage it to be used as a working, live tool.

The second tool that I'm highlighting is the role of the return-on-investment analysis, as I mentioned just a few minutes ago. It's a useful tool both for planning and for postimplementation. In the planning phase, you actually can use it to estimate what the potential effects of changes that you're making might be on hospital finances and your costs and return on those costs. And postimplementation, you can actually use your own data from the implemented program or the implemented changes that you've made to estimate the actual effects, what kind of effect it had on the hospital finances. This is only part of the effects analysis. Obviously, you want to look at the effects on the quality and safety measures that you're working with, but hospitals also need to know what the financial consequences are, and this tool can be used for doing that. The tool actually provides instructions for performing an ROI [Return on Investment] and gives an example for doing that. Ellen, I will turn it over to you to talk through how you've worked with some of these tools in your process.
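[Editor's note: A back-of-the-envelope version of the planning-phase calculation Donna describes might look like the sketch below. The numbers are invented for illustration; the toolkit's return-on-investment tool provides the actual worksheet and instructions. Postimplementation, the projected values would be replaced with actual counts and costs from the hospital's own data.]

```python
# Invented planning-phase estimates, for illustration only.
implementation_cost = 150_000   # staff time, training, and supplies for the intervention
events_avoided_per_year = 25    # projected reduction in adverse events
cost_per_event = 10_000         # assumed incremental cost of one event to the hospital

annual_savings = events_avoided_per_year * cost_per_event
net_benefit = annual_savings - implementation_cost
roi = net_benefit / implementation_cost

print(f"Annual savings: ${annual_savings:,}")   # $250,000
print(f"Net benefit:    ${net_benefit:,}")      # $100,000
print(f"First-year ROI: {roi:.0%}")             # 67%
```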

Ellen Robinson: Thanks, Donna. So this is a little snapshot of what the prioritization matrix looks like. It's an Excel spreadsheet, and your calculations are built in, so it's really nice to use. If you look at the blue, you put in your own rate, pick a timeframe, and then pick who you want to compare yourself against. One of the things that does come out of the AHRQ software is a target rate. A lot of that is based on the HCUP [Healthcare Cost & Utilization Project] data that kind of is more population based, but you do get a target, what you might be shooting for. Again, we were trying to be above the UHC median, so we included that rate. Those are all in the blue area. And then the green area, you put in what's your volume of these events, and there was a cost assigned to these events at the beginning, during the pilot. So those costs were kind of set, and I'm not sure right now, Donna, if those are still in there as a cost.

Donna Farley: They are not in there now. We took them out because there were so many versions of what should be in the cost that we felt it was better to keep it empty.

Ellen Robinson: Right. And then it will calculate the annual cost to your organization. In the purple section, from zero to ten, you want to look at, what's the cost of implementation? Is there less cost to implement than the cost of the event? Does this align with goals we're already working on? Is it something that is either a regulatory mandate or that has the potential to negatively impact how the public perceives your hospital? How high of a risk is that? And again, that's from zero to ten. You get a total score there, and then across in the orange and yellow section, the question is, do you really have the support that you need to roll this program forward, so executive support, staff support? Do you have the time and effort to be doing that? You just tell it yes or no, and then you pick out the things that seem the most impactful in your organization.

When we originally started this project with the pilot, we said we had three priorities. But when we looked down at the DVT [deep vein thrombosis] row, if you look at the annual volume of events, at that time it was 86 VTE [venous thromboembolism] events for the year. We thought, that's really going to be the biggest bang for our buck, and there's also a lot of interest in that in our hospital, so that's what we were going to pick. So I think it would be fine, too, as you work through these, to select one or two things to start with, see how those projects roll out, and then add things as you learn from each one.
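[Editor's note: The sketch below mimics the weighted scoring idea behind the prioritization matrix Ellen walks through. The criteria echo her description, but the candidate indicators, 0-10 scores, weights, and scoring rule are all invented for illustration; the toolkit's Excel matrix has its own built-in calculations.]

```python
# Illustrative only: invented scores and weights, not the toolkit's spreadsheet formulas.
candidates = {
    "Postoperative VTE":       {"annual_cost": 9, "strategic_alignment": 8,
                                "regulatory_or_public": 7, "ease_of_implementation": 6},
    "Iatrogenic pneumothorax": {"annual_cost": 4, "strategic_alignment": 5,
                                "regulatory_or_public": 6, "ease_of_implementation": 7},
}

# Each hospital can choose its own criteria and weights (these happen to sum to 1.0).
weights = {"annual_cost": 0.4, "strategic_alignment": 0.2,
           "regulatory_or_public": 0.2, "ease_of_implementation": 0.2}

def total_score(scores):
    return sum(weights[criterion] * value for criterion, value in scores.items())

for indicator, scores in sorted(candidates.items(), key=lambda kv: total_score(kv[1]),
                                reverse=True):
    print(f"{indicator}: {total_score(scores):.1f}")   # VTE 7.8, pneumothorax 5.2
```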

As far as the return on investment tool, I think that 5 years ago, if you said quality, people really didn't know what you were saying about hospital quality. Now it's all over the newspapers, and it's quite the buzz word. And I think it seems to me that efficiency is now kind of one of the new buzz words, and we're really still partnering with our decision support and our finance team to identify a real meaningful metric. As Donna mentioned, PSI event costs really vary in the literature, so it's hard to have a target. So we really haven't got our heads around this return on investment yet, but it is getting a lot of attention through our financial support and financial leadership. So that's something we're continuing to work on.

Donna Farley: Lise, do we have any questions on this round?

Lise Rybowski: Yes, we do. We still have that question about the priority matrix. Is that something you want to speak to, or do you want to hold that?

Donna Farley: Sure. Why don't we address it now?

Lise Rybowski: The question is whether you could expand on the use of the priority matrix and the criteria contained within it.

Donna Farley: In a way I think I just answered it when I gave the talk to that slide, but let me visit it again because this is really a key point. We do see this particular tool as a flexible working tool and that hospitals should indeed add factors to the Excel sheet that they feel are important. There may be some factors in there that they don't think are important, that they could eliminate from consideration. What we have on the rows in that prioritization matrix is a row for each of the Quality Indicators so that you actually are developing the measures on all of the ones that you are considering as potential priorities for quality improvement. You may even want to put other issues in rows, insert them in there, because you inevitably are going to be looking at priorities within the Quality Indicators and also between one or more of the Quality Indicators and other quality issues that you may have in the hospital that your organization feels are important.

We recognize that your choice to work with these specific indicators is being made in a larger context. So we encourage flexibility in the use of this tool. Don't be locked into it. It's provided as a working template for you to adapt and apply in the way that's most effective for your particular situation.

Ellen Robinson: And if I can add in about the use of it as well, while I compiled the rate data and the number of events, we then sat down with a group of three or four leaders from quality and administrative leadership and said, "What do you think the risks are? What do you think the barriers are?" Because I think it really was a useful brainstorming session that we had to fill it in, because you don't always know the ins and outs of what else is going on. So I think it's good, when you are filling it in, to have leadership representation helping you go through it to, I think, also get buy-in on what is going to be your priority.

Donna Farley: Really important point, I think, and it actually reminds me to step back one step to the tool that we identified in the A section, which is assessing your readiness to make improvements. That also is a tool where you can—and we recommend it—have a number of people in your organization actually fill in that questionnaire, that set of questions, and then sit down as a group and compare your responses and talk about it. It's an opportunity for dialogue among your leadership team, and I think that's exactly what Ellen is saying here. Use the prioritization tool as a vehicle for discussion and joint, shared decision making.

Lise Rybowski: Thank you. We have a question here whether you would recommend the same process for PSIs for pediatric hospitals.

Ellen Robinson: I can speak to that. The AHRQ software also runs the PDIs [Pediatric Quality Indicators], and we had this adult project really up and running, and our pediatric chief said, "What about these PDIs that you see reports on? How can we get those to review?" So we actually have implemented it. We don't have a full pediatric population; it's one of our subsets for trauma. But we have everything set up identically for pediatrics, so I think you definitely could implement it.

Lise Rybowski: We also have a question about whether the tools would be helpful for hospitals that have small volume.

Donna Farley: We actually hope that they will be particularly helpful for hospitals with small volume because many of these hospitals don't necessarily have the resources available to them to reach out to external consultants, and they may not have access to membership associations. A caveat on that, however, is that with small volume, you have to be very careful in how you interpret the rates that you calculate for your indicators because you have a small denominator that you are working with and your rates are going to be very noisy, as they say in the statistical world. They are going to fluctuate more from time to time than the rates for a hospital with a larger patient volume. Having said that, we really hope that these tools will, in fact, empower some of the smaller hospitals to do some quality improvement work that they might not otherwise be able to do because of not having the kind of staffing depth and internal resources to work with.
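[Editor's note: A rough way to see Donna's point about noisy rates at low volume is to put confidence intervals around the same observed rate at two different denominators. This sketch uses a standard Wilson interval with invented numbers; it is not a tool from the toolkit.]

```python
import math

def wilson_interval(events, n, z=1.96):
    """Approximate 95% confidence interval for an observed proportion."""
    p = events / n
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / (1 + z**2 / n)
    return center - half, center + half

# Same 4% observed rate, very different certainty.
for events, n in [(2, 50), (40, 1000)]:
    lo, hi = wilson_interval(events, n)
    print(f"{events}/{n}: rate {events/n:.1%}, 95% CI roughly {lo:.1%} to {hi:.1%}")
# 2/50    -> about 1% to 14%: one extra or missing event moves the rate a lot
# 40/1000 -> about 3% to 5%: much more stable
```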

Lise Rybowski: Thank you so much. I think it's time for us to move on to your third segment, and then we will have time for questions at the end both for the two of you as well as for our other speakers. So Donna, you can take it from here.

Donna Farley: We move on to the last one, which is the actual set of tools involved in implementing your improvement plans. You can see from this slide that these include all of the tools—or the rest of the tools, I should say—in Section D, the implementation methods, and the tools for monitoring progress and sustainability and, again, the return-on-investment analysis. These are the tools that we're specifically identifying here. The remaining tools in the implementation methods section are those for implementation planning, the result of which is a written action plan that you would then work with in actually carrying out the improvement actions you have defined; the implementation measurement tool to help you assess what kind of progress you're making in the implementation process; and finally, a project evaluation and debriefing at the end of your improvement initiative.

Monitoring progress for sustainable improvement kicks in after you have completed, if you will, the acute phase, the short-term intervention phase of your improvement initiative, and you want to be sure that any gains you've achieved, any improvements you achieved, are sustained, that you continue to maintain those improved practices. In order to do that, you actually have to put some mechanisms in place to monitor your rates and monitor your actions so that you can flag a loss in momentum if you see one and pick that up. The return-on-investment analysis at this point, as I discussed earlier, would be used on your actual data. How did you do, what did those changes cost you, and what did they reap in terms of financial return?

I'm going to focus right here, before I turn the stage back to Ellen, on the monitoring tool, in part because, as Peter mentioned earlier in response to one of the questions, few of the hospitals in our field test actually got to the point, even after a year, where they had fully completed the implementation of their improvement plan and were, in fact, moving into a monitoring mode, a sustainability mode. This last phase tends to escape the attention of many organizations and many people because it comes at the end of a process and tends to get underestimated. I want to emphasize here how really important it is to be sure that you can sustain the improvements that you've made.

The guidance that we have in the toolkit includes the five items that are bulleted here: the need to establish a limited set of effective measures; a schedule for reporting on those measures; a report format to communicate regularly and clearly to the players within your organization, all the folks that are important to this process, how you are doing and how well you're maintaining the improvements; procedures to act on the problems found; and finally, a periodic assessment using this information to be sure that you've held the line and that nothing major has slipped to cause a loss of momentum. Ellen, let's turn it over to you again.

Ellen Robinson: Thanks, Donna. I'm going to have you move my slides for me. At this point, I just want to talk a little bit about the D tools, which I'm kind of calling a project management toolkit. For those of you who are clinicians like I am, I didn't really have a lot of experience with project management, so the tools to help you make a charter for a group and do your gap analysis were very, very useful. I think the poll at the beginning of the webinar also showed that it's easy to get a plan going, but it's hard to keep it going and make sure that things get followed through on. A lot of those tools can really help with those types of problems.

The best practices tools compile a lot of the clinical literature that you would need when you're going to your groups of clinicians and saying, "Are we doing best practice around line infections? Are we doing best practice around VTE prevention?" Those are really, really useful as well. They helped us as we developed different task forces around our selected PSI areas, because you may not have the same person on the task force for line infection that you do for iatrogenic pneumothorax or VTE. These tools really helped me keep our teams focused and on track during the early stages of our implementation.

The next slide gives an overview of how I review these cases. Every month I get a feed from our billing system, and I run it through the AHRQ software. Then I block out some time, sit down, and review my cases, and I decide whether something seems more like a coding or documentation issue, which I send out to my liaisons in documentation and coding. For those cases, we meet once a month and say, "Yeah, you know, that was a wrong code," or "There's an exclusion code that really should be on there. Maybe it was a present-on-admission issue that wasn't clear." Then they will go through and update the coding. Or they may say, "You know what? That's a real event, and here's where we found it." Or I may have found that it was a real event in my review. Those go out to our service chiefs for review, and they are expected to bring back to us either yes, this was a problem, and here's what we're going to do about it, or no, we don't have any QI concerns.
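A minimal sketch of that monthly triage flow; the case records, field names, and categories below are illustrative stand-ins, not Harborview's actual system:

```python
# Illustrative triage of PSI-flagged cases after the initial review:
# coding/documentation questions go to the coding liaisons, apparent real
# events go to the service chiefs, and the rest are noted as false positives.

flagged_cases = [
    {"case_id": "A101", "psi": "PSI 12", "initial_review": "real event"},
    {"case_id": "A102", "psi": "PSI 9",  "initial_review": "coding/documentation question"},
    {"case_id": "A103", "psi": "PSI 11", "initial_review": "no event, no coding issue"},
]

def route(case):
    finding = case["initial_review"]
    if finding == "coding/documentation question":
        return "documentation and coding liaisons (monthly review meeting)"
    if finding == "real event":
        return "service chief review (QI concern: yes/no)"
    return "logged as a false positive / flawed metric"

for case in flagged_cases:
    print(f"{case['case_id']} ({case['psi']}) -> {route(case)}")
```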

Then in the top right-hand corner (and I'm going to touch on this at the end), there are just times when there really wasn't a clinical event and there is no coding issue, but the tool itself may give you a false positive. Sometimes I just call that a flawed metric. Remember, we're doing the best we can utilizing the administrative data, and there are times it just doesn't fit every clinical scenario.

Here's an example of the database that I've developed to track some of these cases. It's just built in Access, and essentially I import the numerator cases from the software into this tool. It allows me to know which indicator flagged the case, when the event happened, and what the situation is that I have a question about, and then I track who I send those out to, when they respond, and what their outcomes were. This has allowed us to trend and track over time the number of events we have in the different categories. Again, we put them in these buckets: maybe there's an opportunity here; no, this one was patient disease; this one was documentation; or the metric just doesn't fit our population.
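For illustration only, here is roughly what such a tracking table might look like, sketched in SQLite rather than Access; the column names are assumptions, not the actual database design:

```python
# A rough sketch of a case-tracking table like the Access tool described above.
import sqlite3

conn = sqlite3.connect("psi_tracking.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS psi_cases (
    case_id        TEXT PRIMARY KEY,
    indicator      TEXT,   -- e.g., 'PSI 12'
    month_flagged  TEXT,   -- when the event was found
    question       TEXT,   -- what is unclear about the case
    sent_to        TEXT,   -- coding liaison or service chief
    response_date  TEXT,
    outcome        TEXT    -- opportunity / patient disease / documentation / metric mismatch
)
""")
conn.commit()
conn.close()
```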

Here's an example of one of the ways that we report this data on a monthly basis, on our Harborview intranet. What we've done here is stack the raw number of events in the different areas that we're tracking over the last few fiscal years. I think there was a good comment about the rates: if the rate is three out of a thousand patients, okay, that makes sense. But if you're dealing with very small numbers and you start to say it's .5 per thousand, what is that? There's no half an iatrogenic pneumothorax. So we find just having the actual counts to be very helpful for the staff. I don't have the slide here, but our next drill-down on this tool breaks it out by service across the bottom, with stacked bars that show which events each service is reviewing. We report that every month, and it's used in a variety of presentations on quality and safety throughout the organization.

I wanted to summarize some of the lessons I learned going through this process, and the first is validate, validate, validate. Peter touched on it at the beginning: when you're starting to take this administrative data out to clinical audiences, it's really important that you start, I think, with real case examples. I think it was at my first trauma council when someone said, "Where are you getting these, from billing data? Well, of course they don't make any sense. They are not real." And you kind of lose your audience. So now when I present, I bring a story of a patient where we actually did find an issue and some of the things that we can do to improve on it.

I think having a good knowledge of the different specifications and how those PSIs may or may not fit your population is important, as is backing from your leadership for the importance of the project. And again, a coding lead liaison is really critical. You don't want to be sending cases back to individual coders. You need someone who is going to help you understand the measures and then help disseminate information to all of the coders.

This slide just gives you a snapshot of what I found. What was interesting as I was compiling this for 2011 is that when I went back to 2010 and then the half year of 2009, we were seeing almost exactly the same breakdown for these PSIs. These are the different PSIs we track on a monthly basis. We track seven of the PSIs; we don't do any OB care at Harborview, so we don't have to track those two. Of the cases we sent out, 45 percent were events that actually did happen, but we didn't have any quality concerns. In about 18 percent of the cases we said, "Yeah, maybe there's an opportunity here in our system to improve this." About a quarter of them were related to either documentation or coding, and 12 percent fell into that, quote, flawed metric category. An example of that: PSI 9 is for postoperative hemorrhage or hematoma, but the ICD-9 diagnosis code that helps you flag it is really a hemorrhage or hematoma complicating a procedure. So we were having patients with a ruptured triple A [abdominal aortic aneurysm], for example, coming in and having a procedure. Yes, they were bleeding during the procedure, but they never had another procedure, so they never really had a postoperative hemorrhage.
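As a small illustration of how a breakdown like this can be tallied from the tracked outcomes, here is a sketch with made-up counts chosen to roughly match the percentages mentioned:

```python
# Tally case dispositions into the four buckets described above
# (illustrative counts, not the actual Harborview case totals).
from collections import Counter

dispositions = (
    ["event, no quality concern"] * 45
    + ["improvement opportunity"] * 18
    + ["documentation or coding"] * 25
    + ["flawed metric"] * 12
)

totals = Counter(dispositions)
n = len(dispositions)
for category, count in totals.most_common():
    print(f"{category}: {count}/{n} ({count / n:.0%})")
```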

So again, I learned from my coders; they understand the details of what these codes are really trying to capture. Some of those just don't really fit. PSI 11 covers only elective cases who have respiratory failure, and the measure is looking to see whether the patient was either intubated or on a ventilator a certain number of days after their original procedure. We do a lot of planned spine surgeries here (and only elective cases are in the denominator), and they are two staged. After the first stage, the patient may be in the hospital for a day or two recovering until they get their second stage, and maybe after that second stage they are on the vent for a few hours after the surgery. That may flag as, boy, your patient was on a vent 5 days after their original surgery; they must have had a problem. But again, these measures just don't always fit some of these more difficult patient populations.
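To make the staged-surgery scenario concrete, here is a toy illustration of the timing logic; the dates and the day threshold are invented for the example and are not the actual PSI 11 specification:

```python
# Toy example: a planned two-stage elective case can look like prolonged
# postoperative ventilation when the days are counted from the first procedure.
from datetime import date

first_stage = date(2011, 5, 2)        # original elective procedure
second_stage = date(2011, 5, 6)       # planned second stage, 4 days later
vent_hours_after_second = 6           # brief, expected ventilation after stage two

days_since_first_procedure = (second_stage - first_stage).days
if vent_hours_after_second > 0 and days_since_first_procedure >= 4:
    print(f"Ventilated {days_since_first_procedure} days after the original "
          "procedure -- flags for review even though it was planned care.")
```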

I think the other thing we learned is that the AHRQ software is really one tool that hospitals can use to identify improvement opportunities, and as health care information technology becomes more sophisticated, we as hospitals are going to have more and more data. So we really wanted to challenge ourselves: how can we be creative and identify these cases in our actual clinical systems? We should know if someone has a PE regardless of whether it was coded. So we wanted to figure out how to find the false negatives. I have to admit, we started by saying, "We definitely think there must just be more false positives. We're certainly catching everyone." But what we did was go back to our radiology and vascular systems and design a tool that helped us find positive cases in real time out of those diagnostic systems. What we found is that we actually had more VTE events than were reported and picked up by the Patient Safety Indicator software.

So, for a year of events, 70 percent of the cases that we found in our gold-standard clinical systems were also flagged by AHRQ PSI 12, but 30 percent were not; they were flagged only by a diagnostic system. There were a few reasons for that. Some of them just were never identified in the administrative data: they either weren't coded, or they weren't in the top 24 diagnoses. The software only allows you to submit 24 diagnoses to run through, so if your DVT is identified in diagnosis number 25 or 26, you're not going to pick it up. Or things were sometimes incorrectly coded as present on admission when they actually were not. Then the other issue is that PSI 12 only looks at postoperative patients, so if your patient doesn't have an operative procedure, they are not even in the denominator to be looked at for a hospital-acquired VTE. Some of our medical patients are quite ill, and we want to make sure they are getting the right prophylaxis as well. So we really feel that without our internal clinical events search tool, these cases would have been missed QI opportunities; now they are additional QI opportunities, along with the administrative data, that allow us to really try to address all of our patient concerns. I think we're on to questions again.
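A minimal sketch of the kind of cross-check being described, comparing the cases found in the clinical (radiology/vascular) systems against the cases flagged by the PSI software; the identifiers and counts are made up to mirror the 70/30 split mentioned:

```python
# Compare a clinical "gold standard" VTE list with the PSI 12-flagged list
# to surface cases the administrative data missed.

clinical_vte = {f"E{n:03d}" for n in range(1, 11)}    # 10 events from diagnostic systems
psi12_flagged = {f"E{n:03d}" for n in range(1, 8)}    # 7 of them also flagged by PSI 12

overlap = clinical_vte & psi12_flagged
missed_by_psi = clinical_vte - psi12_flagged

print(f"Also flagged by PSI 12: {len(overlap)} of {len(clinical_vte)} "
      f"({len(overlap) / len(clinical_vte):.0%})")
print(f"Found only in clinical systems: {sorted(missed_by_psi)}")
```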

Lise Rybowski: Thank you so much. Ellen, we have a lot of questions for you, so I hope you're prepared. One question we have is whether you ever dealt with duplicated data being measured but having different results, and how did you handle those cases?

Ellen Robinson: Sure. I think I was going to comment on this earlier: when you're looking at the publicly reported data, you may say, "That's totally not what I'm seeing. I have this rate." The first thing is to find out which version they are using compared to what you're using, because there are quite a few differences in who's in the inclusions and exclusions across the various versions, as well as whether you're using the SAS or the Windows version. The AHRQ help desk is also really, really helpful. They are very responsive, and if you do find problems, they can help you troubleshoot; maybe something you are running is missing a POA indicator or something like that. What I do, since I run this every month, is call that preliminary data, and then I use the UHC data that comes out a little bit later as my kind of final-final. So that's one way I've tried to explain that.

Lise Rybowski: Another question for you, do qualifying codes that are present on admission for PSI 4 cause that patient to be included?

Ellen Robinson: That is a good question. I'm still wrapping my head around PSI 4, I have to say, and I've been going back and forth a lot with the help desk, so I would again recommend that you utilize their expertise. It appears that in some cases, yes, things that are present on admission are being flagged, and it may have something to do with the way your input file is going through; at least that's the explanation I've received thus far. We do all mortality review through a separate process here, so we haven't been using those cases other than for validation sometimes. We send cases out for clinical reasons, and then we examine, "This is a PSI 4. Maybe we should look at that again," kind of after the fact. So I would suggest the help desk for that one too.

Lise Rybowski: Great. I'll give you a break for a second. Joanna, we have some questions here about whether or not AHRQ would support organizations that want to work with facilities in a collaborative way either to test the use of this toolkit or to work on it collaboratively. Does AHRQ have any mechanisms for helping organizations who want to do that?

Joanna Jiang: I think this is a great idea. There's definitely room for testing and refining this toolkit further, particularly in multiple settings: hospitals with different structural characteristics and different community characteristics. I think this would be a good idea, and we can explore it and see whether we would be able to have funding for further refinement and testing of this toolkit.

Lise Rybowski: Donna, I think this is a question for you, going back to some of your earlier comments on the ROI. The question is, in estimating the cost of the investment, how did you view the division of lumped costs, for example, medications? Or maybe that's a question for Ellen.

Donna Farley: How did you view the division of lumped costs? To be perfectly honest, I don't think we got into that level of detail in the instructions that are in the ROI tool. These are the kinds of decisions that each hospital has to make as it determines what costs it is dealing with and at what level, and I think it's going to be specific to each hospital. So I have to say that, at this stage of the game, that tool is silent on that level of detail.

Lise Rybowski: Ellen, I'm back to some questions for you. First one is, if you're updating coding, do you also rebill? And if you rebill, do you rebill all or only those with a DRG change?

Ellen Robinson: That's a really good question. We struggled with that in the beginning, and I think at this point the philosophy is that we rebill, because now that these data are going outside the hospital, we really feel that if we don't, you're going to have that disparate reporting issue out in the community again. While we are rebilling, we are also trying to implement strategies so we don't have to rebill, so that the coders really understand the nuances of some of these codes and are querying the physicians up front to clarify them before the bill ever goes out. So I think that's a really good question, and those decisions have been made at a level higher than me.

Lise Rybowski: Another possibly tricky question for you is whether you are willing to share the Access database shell, without the data of course.

Ellen Robinson: Yeah, I believe we were actually sharing that with Denver Health. Some of that has to do with how do we get it to you, but certainly someone can contact me afterward, and I would be happy to talk through it.

Lise Rybowski: Joanna, I think this is a question for you. We have a question about whether the free software will be available in a Mac operating system format.

Joanna Jiang: I think currently we have that in SAS and in a Windows® version. I assume it should be able to work in a Mac system, but we can check on that.

Lise Rybowski: I think this is also for you, Joanna. We had a question whether and when toolkits will be available for the PQI and the PDI measure set.

Joanna Jiang: For the PQIs, I believe we are developing another toolkit. But since the PQIs are not so much about outpatient care as about preventable hospitalizations, they are kind of a window into the access and quality of outpatient care, so that toolkit may address different issues than this one. For the PDIs, we can keep that in mind. Maybe in the near future we can expand this toolkit to include the PDIs.

Lise Rybowski: Great, thank you.

Donna Farley: Lise, before you move on, a comment on that. Several of the tools in this toolkit are very specific to the PSIs and the IQIs. Others, however, are fairly generic and would, in fact, be adaptable and applicable to other performance measures, other AHRQ indicators, or even other measures because they are really related more to the quality improvement process than to the performance measures themselves. So I think Joanna's point on expanding this toolkit to other indicators really reflects that. Even if you're working on some other performance measures, there may be some tools in this toolkit that are useful to you, even with the different measures.

Lise Rybowski: Thank you, Donna, for clarifying that. I have another question. What is the back-end database of the AHRQ toolkit?

Donna Farley: I think I can answer that one, Joanna, because there isn't one. The back-end database, as it would be anywhere, would be for the indicators themselves. The toolkit doesn't get into measuring anything that uses an actual database. All of these tools are freestanding, if you will, and they are available for download and use by the hospitals. The database that would be used would be the data that a particular hospital has that they would actually apply the AHRQ software to.

Lise Rybowski: Thank you, Donna. I have another question for you about whether you've had any experience with the IQIs in addition to the PSIs.

Donna Farley: Whether we had experience with the IQIs? We actually did not, simply because in the field test all of the hospitals chose to use the PSIs as the performance target for the interventions they started to work on. That reflected, I think as much as anything else, some of the priorities being placed on hospitals from external sources, particularly Medicare, which is now beginning to work with the PSIs. I would imagine that many of the same things we found in the field test for hospitals working with the PSIs would be applicable to the IQIs: the issues of coding, how to calculate the rates, and how to be sure you have accurate rates when you're doing trend analysis or benchmarking are probably very similar.

Lise Rybowski: Thank you. I think we are now out of time. I'm sorry we weren't able to get to all of your questions, but we did actually get to most of them, so I'm pleased about that. On behalf of AHRQ and our speakers today, I want to thank everyone for participating in today's webinar. We'll be letting you know when the recordings are available and, as I mentioned in the beginning, when we'll be having a second webinar that will repeat the content on this event. Thank you all. Have a great afternoon, and good-bye.
