2008 State Snapshots (continued)

On July 9, 2009, Ernie Moy, Foster Gesten, and Keely Cofrin Allen presented a Web conference on the AHRQ 2008 State Snapshots Tool. This is the transcript of the event's presentation (Part 2 of 2).

July 9, 2009
1:00-2:30 PM ET

(Continued)

Margie Shofer: I think that is about our time for questions for now, but we will get to the rest of these questions, or try to get to as many of them as possible, in the next Q & A session. It's now time for us to move on to the next presentation, so I'm going to turn this over to Foster Gesten.

Foster Gesten: Thank you, Margie, and good day to everyone. After the last presentation, it should be clearer what the opportunities were for us in New York and why, and I would say two things. One is that seeing Ernie go through the demonstration of some of the new features, I think that it's, to use a clinical term, "way cool." Two, in terms of our process, when we first started looking at these data, we had to go through the steps of denial and anger about what they showed. But marching through those steps, we found the tool very useful in thinking about how to prioritize, and that's what I'm going to talk about a little: how we used it in a specific Medicaid reform effort.

Like other States and other folks, we do a lot of measurement and a lot of public reporting, specifically related to health plans. We also benchmark against national benchmarks to the degree that they exist for HEDIS and other databases, but we really lacked, and so welcomed, the ability to compare and look at how we're doing as a State.

Our use of the State Snapshots this past year falls within the context of fairly aggressive and comprehensive Medicaid reform over the past 2 years, in which we've been looking at moving investments from the in-patient side to the out-patient, ambulatory care side. The reason will become clearer when you look at some of the data, but the history was one of tremendous imbalance, with the consequences that one might expect from that imbalance.

Our broad objectives were to promote primary and preventive care, focusing on those services and those programs that had high value. We've been using the Prevention Quality Indicators [PQIs] in other contexts, including other grants that we had, in which we were able to see where New York was in comparison to a few other participating States. We understood that we had some issues and some challenges, particularly relative to preventable hospitalizations for folks with chronic disease and most especially asthma and diabetes. So, the State Snapshots and the use of those for benchmarking came at a serendipitous time for us as we were looking at how to start to turn this around and, specifically, how we wanted to create support for a new benefit related to certified diabetes and asthma educators.

I deliberately decided to keep the 2007 data and not update this, as you saw the updated data from Ernie, because this was really the information we were looking at as we made our decisions. We looked at where we were in terms of all measures and saw that we were average and, based on the 2007 data, going in the wrong direction. You can see where we are in terms of a regional comparison: average. Average is better than below average, but I think nobody strives to be average, and certainly we didn't.

This slide really shows the Dashboard in a different configuration than Ernie showed previously, but again, this helped us to think about the areas that we specifically wanted to focus on and that were problematic for us. Using this data, we clearly found support for focusing on areas related to chronic care and both hospital and ambulatory measures in particular.

The next slide shows it was no accident that we focused on asthma and diabetes, given where we were in terms of performance in these two clinical areas, with respiratory diseases accounting for more than just asthma. There are a significant number of measures where we are below the national average in the area of asthma.

This is just another way of looking at it, or getting underneath the hood, to see which are the weakest measures. Looking at this we clearly knew we were headed in the right direction in terms of focusing on hospitalizations and the potential for avoidable hospitalizations around diabetes and asthma, and the data that Ernie showed goes along with this.

This is my one defensive slide to show that there is actually at least one area in New York in which we're doing well. Our challenge, as we thought about this, was that we had large health care investments in New York in public programs, but in health care in general we had average to weak results. It corroborated other national reports, such as those from The Commonwealth Fund and elsewhere, that showed in many ways we're in the middle of the pack, and it didn't jibe with where our investments were.

We saw specific gaps in chronic care related to diabetes and asthma and particularly focused on preventable admissions, and although the data is not Medicaid specific—most of the data isn't specifically Medicaid—we still viewed this as relevant in looking at the entire population. We see that some of the newer reports do have more of a focus on disparities or on low-income populations. Despite some of those limitations we were able to really use this information to advance the argument to all of the folks that we had to convince, both in the executive and the budget and legislative arenas, to support the specific new benefit related to diabetes and asthma.

Again it was particularly important for us to gather this information and to summarize it. We knew at the time that the clinical evidence for some of these benefits was less than clear, but the State Snapshots and the benchmarking data allowed us to really focus and understand that this was clearly a priority area in which we needed to act. The benchmarking helped us focus on areas and prioritize, together with the data that we had on disparities, which again pointed in the direction of a specific need for the low-income Medicaid population. Even though at the time we did this there were no specific cost calculations for Medicaid, we knew from other data that reductions in preventable hospitalizations and improvements in care had major cost implications as well, so being able to use the data in combination was extremely useful.

Just to wrap up, I'll mention some of the next steps for us. Part of this is really a conversation with folks at AHRQ and a better understanding of what is behind the differences on measures. Somebody asked about whom to contact, and the reality, I think in our State and probably for other States, is that there is not one person but teams of people throughout State agencies who might be able to give some information. Trying to figure out a way to organize the information, how to understand why, in the examples that we saw, we see such big differences in nursing home care between Utah and New York, is a struggle. Is it contextual factors? Is it programmatic features? Are there benefit issues? I think many of us struggle with trying to understand what has gone into this when we see the differences. We try to understand the relationship between measures and the contextual factors. Which of these measures relate to one another, which are measuring the same thing or different things, and what is the relationship between some of the structural or process measures and some of the outcomes? In terms of prioritizing action, it is important to try to sort out which of the measures really have the greatest impact. I think some of the cost calculations are going to be very helpful.

I'm pleased to see in the newest iteration that there's much more integration of disparities data in the State Snapshots. I think it's a very important area and dimension to keep in mind as States start thinking about the important priority areas. As you can see, some of the data that was shown for New York relative to Black-White and Hispanic-White differences presents some opportunities for us. Lastly, a question was posed by someone in the audience about being able to customize peer groups. Certainly we looked at national averages, we looked at regional averages, but there are certain States that we look at and think of as similar peer groups in terms of their population mix or the contextual factors that Ernie went into, based on size or type of benefits or programs. It would be helpful to be able to create that sort of a menu in which one could compare and have averages across whatever might be a self-defined peer group.

I think that's the last slide, and I'll be happy to answer questions at the end of the presentation. I'm going to turn the presentation over to my colleague from Utah, Keely Cofrin Allen, Director of the Office of Healthcare Statistics.

Keely Cofrin Allen: Hi, thanks. I am Keely Cofrin Allen and I am the Director of the Office of Health Care Statistics out here at the Utah Department of Health. Good morning or afternoon, depending on where you are. I'm looking forward to sharing what Utah has been doing with all of you on the call today.

Just a little bit about Utah. We have about 2.5 million people, and 76 percent live along the big lake that you see on the map, on the Wasatch Front. We have a young population and the highest birth rate, so that affects our health measures. We have about a 10 percent uninsured rate, Medicaid enrollment of about 240,000, and a median household income that puts us at 13th overall. The good news is that we have a relatively young and healthy population, and smoking and drinking are not as common here as in other States; however, we do have our share of issues around obesity, given that our obesity rates are increasing faster than the national average. That's something we need to keep an eye on as well.

A little bit about our office. We were brought into being in 1990 by the Health Data Authority Act. This act formed the Health Data Committee, and our office serves as staff to that committee; I'll tell you a little bit more about that committee in a moment. The purpose is still very relevant today even though we were formed 18 years ago. We have a long history of doing quality health data analysis work.

Here is just a quick look at our Health Data Committee. It has 13 members who are appointed by the governor's office and who serve 6-year terms. We like to talk about the five P's in our office to conceptualize our customers. Providers and payers were the very first users of our data, purchasing our data sets for internal analysis; patients and purchasers have been the focus in more recent years; and policymakers are really our newest users as we navigate this very complex area of health care reform that is of interest both at the national level and at the State level. I'm going to be focusing on policymaking in this presentation today, but I just wanted you to see that we do have a variety of consumers who use our data.

As I said before, we were established in 1990 as staff to the Health Data Committee. We began collecting hospital in-patient data in 1993, and in 1996 began collecting data on ambulatory surgery centers and emergency departments. That year also saw us begin to report HEDIS and CAHPS data. We're one of the earliest States to do many of these things, and this year we'll become one of the first States to report CAHPS data for PPOs [Preferred Provider Organizations]. Our newest project is the all-payer database: we'll be creating a State-wide database of claims and enrollment data that allows us to look at the cost of an episode of care. We're in final testing, and we're expecting to report on that beginning in the fall.

In 2007 we released the Challenges in Utah Health Care Report—you can see a screen shot of it on the slide—which was a policymakers' manual on the state of health care in our State. We see our job in this office as presenting the big picture to policymakers. They need to understand two important issues: where our State stands relative to other States—both other States in the Nation and other States in the Mountain Region—and the direction in which we're moving. Are we getting better on things or worse on things? Obviously the health care system is complex, so we need to tell a story that includes multiple health settings. It could be about Medicaid or hospital systems, home health care, which we do very well in, wellness and preventive care, which as you saw we don't do quite as well in, or primary care.

It's very important for States to track the health priorities in their particular State and tie their message to those, since that's what policymakers will be tracking. For us, those are the priorities of the Department of Health Director, the governor's office, and the legislature. Ideally they will be on the same message, but they might be slightly different, and in any case they are going to approach solutions to the problem differently. They play different roles, so you have to tailor your message to fit the role and the approach that particular policymaker is going to take. Lastly, you want to make policymakers aware of emerging issues. We see it as our job in this office to track things not only at the State level but at the national level, and this gives us an idea of things that might become State priorities down the road. When policymakers come to us asking about a new issue, we can have something ready to help them understand it.

We released the Challenges in Utah Health Care Report in June 2007, on the same day that AHRQ released its National Healthcare Quality Report. We did a joint release with AHRQ, and it generated a lot of interest in Utah's national ranking. We presented where we are as well as where we are going on a variety of measures related to cost, quality, and access.

The report included 16 summary indicators. Some were from the State Snapshots, some were from other AHRQ tools such as HCUPnet (the Healthcare Cost and Utilization Project), and some were from a special analysis that was done with in-kind assistance from Anne Elixhauser at AHRQ.

Here is Utah's overall performance. This is the dashboard meter that we used in the report, so you'll see us in the "strong" category, which is where we were when that report came out. As you saw in the more recent demonstration that Ernie did, we've slipped a little bit. This is a really nice dashboard measure and something that policymakers like. It's easy to understand, it's clear, and it's memorable. It's very visual, it's got colors, and it ties into things that people already understand, with the green to the red. When you have literally 30 seconds with a legislator, this is something you can focus on and give them as a takeaway message.

We highlighted strengths and weaknesses. These are dashboard measures from last year, so low-performing measures can be targeted for improvement, and average measures should be watched for trends that may be going up or down. Average is not always bad if it's an improvement from last year. Of course, as you saw earlier, the home health care measures show what we're doing right. It's important to distinguish those things that you're doing right because of process—and I think home health care falls into that category—from those that look good perhaps because of demographics. Because of our young age, we tend to be lower on measures of cancer, for example, and that's not necessarily something that we are doing right. It may be something that has to do with our demographics.

Here is a table that we included in the Challenges report. Notice that we've organized the information to help readers absorb it quickly, with three categories: quality, access, and cost. Those are words policymakers should be familiar with, and this particular table shows the measures whose trends are steady. We had three tables: one for improving measures, one for steady measures, and another for measures going down, and we included a page number so readers could quickly flip to the measures of greatest interest.

The same year, The Commonwealth Fund released its Aiming Higher report, and you can see our rankings on this slide. There are two interesting things about Utah in this report. First, Utah was ranked much lower in this particular report than in any other report, including the State Snapshots, and second, there was very large variability in our rankings across measures; Utah was the only State with rankings that were this disparate.

Here is our Department of Health Director, Dr. David Sundwall. He led an investigation; he came to me and asked the Office of Health Care Statistics staff to look at the data used for the measures, both by The Commonwealth Fund and in the State Snapshots, and to really deconstruct them to give him and other people an idea of the methods used to collect the data, the indicators, and how they were put together. He convened an informal meeting of the Utah Medical Ethics Committee in August of 2007. We met at the cabana at his condo, had a potluck, and just had a chance to discuss the utility of measurements of health care quality and how best to incorporate what we learned from these sorts of rankings, both the State Snapshots and other ranking reports that come out.

The summary from that meeting was that there is an important difference between outcome, how the patient fares in the setting, and process, how the care is delivered. If you measure at different levels you'll see different things, so it's important to look at both and see how they interact with each other. For instance, if the outcome you get doesn't match the process, one is good and one is bad, that's a good indication that perhaps you don't have things linked up in the way you'd like to. We need to continue to work on validating measures so we're certain we're measuring what we think we're measuring: we can't take for granted that we are.

There were several questions about measurement, where the measures come from, and what the baseline measure was. I really encourage everybody who wants to use the State Snapshots tool to drill down into those tables, really see what year of data we are looking at and where the data come from, and take the time to go through the tables. In our office, we are very happy to refer to ourselves as data geeks, and we really like crawling through the tables in that fashion and getting a deep understanding of the types of things that can be found in there.

So the take-home message from Utah is, first, that you have to be in touch with your policymakers to understand their priorities and interests; otherwise you'll look irrelevant, be confusing, and give them information that they don't necessarily want or aren't necessarily tracking at this particular time. It's important to really speak back and forth with your policymakers and understand what their priorities are. Second, fully explore the Snapshots for your State, as I already said, and make sure that you or somebody with a data background explores the methodology behind each measure so you can respond to questions or concerns from policymakers. One of the questions we got over and over was why we look good on one report and not on another, but if you drill down and look at the individual measures, the same message comes out of all of them.

We do very well here in Utah on home health measures; we're not doing as well on nursing home measures, and we need to improve there. We have weaknesses in preventive care, cancer screenings, and prenatal care, and when you looked beyond just the summary, which is useful, you saw that the same findings emerged over and over. That was comforting to policymakers because the initial anxiety about why we look different here versus there was eased. We were able to say the measures were put together differently and the composites are constructed differently, and that's why we're getting a little bit of what looks like variation, but the same message comes through.

I also encourage you to work with the AHRQ staff and play with AHRQ's other tools, such as HCUPnet, so you can get additional data that's of interest to policymakers in your State. We have found in Utah that AHRQ staff can be very helpful in receiving feedback on measures and in helping us construct some additional measures that would be of use to us. With that, I'm going to turn it over to the question slide and turn the presentation back over to Margie.

Margie Shofer: Thanks, Keely. We have the remaining time for Q & A.

Question: Does AHRQ have any plans to show metrics broken down by public payer versus private payer versus total State?

Ernie Moy: Well, I guess that's for me. Let's say that we have an interest, and we actually do have a couple of reports that focus on different payers at the national level. It's something that we are interested in, and we could probably use some additional push to go down that pathway, with a couple of caveats. When we look at different payers, payer status means something very different in the survey information that we gather from patients (What kind of insurance do you have?) than it means in another big contributor to the State Snapshots, hospital-based information, where the question is about the primary expected source of payment. We observe that the two often do not line up very well. If people could live with that caveat, that difference in definition, it's something that we could potentially explore.

Question: How are the measures selected?

Ernie Moy: That's pretty straightforward. The measures that are used in the State Snapshots are a subset of all of the measures we use in the National Healthcare Quality and Disparities Reports. They are pretty much all measures for which there is State information. The measures in the Quality Report and Disparities Report were selected through a very elaborate process which occurred about 7 years ago, when we were first asked to do the National Healthcare Quality and Disparities Reports. That process included getting recommendations for measurement of quality from multiple different organizations and an Institute of Medicine committee giving us guidance. As part of that, they took public testimony. There was a Federal Register announcement and there were some stakeholder meetings held around the country to get input about quality measures and recommendations for quality measurement. All those quality measures were then compiled into a mega-list and a Department of Health and Human Services group then went through that and honed in on those they thought were the most significant, the most scientifically sound, and the most actionable, and that constitutes the core set that we track in the Reports as a whole. Again, the State Snapshots are that subset for which there are State data.

Question: We have a question for New York for Foster. Can you explain how you use the AHRQ data to secure payment? Did it result in legislation and if so please explain the process and what the legislation required.

Foster Gesten: Sure thing. I should have made that clear. The end of the story was that the information that we had as backdrop to support defining the need, and why we needed to invest in this area, did result in legislation. That allowed us to add certified diabetes educators and certified asthma educators to the benefits for the Medicaid program, with an evaluation component. We are intent on being able to evaluate both the uptake and the eventual impact of this. It may take some time, particularly in the asthma area, because of the dearth of certified asthma educators. We used the data essentially to craft our argument for why this was necessary, what the value of the interventions was, and how it related to important priorities for New York.

Question: I have a question for both Foster and Ernie. This is from someone in Maine who asks, "Our work in Maine shows remarkable in-State variation. Most health care is local, regional, or at a sub-State level. I'd like to see sub-State breakdowns of much of this data. Is that possible?"

Ernie Moy: Okay. Again, there is a great deal of interest and very little data, or at least very limited data, but we'll take that as additional encouragement to move down that particular developmental path. We agree that when we look at sub-State variation, it's enormous. It swamps the State variation, so I think it's a very important area. It's more a matter of how much we can say at the sub-State level.

Foster Gesten: All I have to add is that I agree and we see the variations in New York just as you see in Maine. However, sometimes when we adjust for other factors the regional differences are explained by things other than the region. It's certainly helpful for us at the State level to be able to dig down once we see there's a problem and try to understand whether it's population driven, whether it's geography driven, whether it's insurance type driven, and so on. And very often the answers to all those questions are "yes," "yes," and "yes," as well as unknowns about why the variation exists.

Question: How is "no data" handled? Is it simply not included in an aggregate measure or is the aggregate measure discounted somehow for missing data?

Ernie Moy: I think that it's just considered average and therefore would not influence your score.

Question: Is there a section where you track medical errors?

Ernie Moy: Yes, actually, one of the new additions to this year's State Snapshots, which I probably should have emphasized more, is the addition of measures related to adverse events. You can find them under "hospital," and they relate to the Patient Safety Indicators developed by AHRQ. These are things like the number of respiratory complications during surgery, the number of cases of post-operative sepsis, etc. If you look under the hospital section you'll see these new adverse events that we track in the State Snapshots.

Question: Is there a way to evaluate whether the changes seen between baseline and current are just noise or standard variation versus some real change?

Ernie Moy: I think there is some variability in terms of what is tracked in the Reports. The thing that I would typically do to see if a change is real or not is to look at the individual measures; if you see a number of related measures headed in the same direction, then I think that change might be real. If, on the other hand, I see one measure that's driving that dramatic change, then I would potentially say maybe that's just some problem with that particular measure for that given year. I think the answer is "yes," but look at it on a case-by-case basis.

Question: Who is the right contact person for the software?

Ernie Moy: It depends on what you want to do with the software. I guess if it's a content question about the State Snapshots then I'm the appropriate contact person. If you're interested, as some States in the past have expressed, in developing a system like this and wanted to get into the nitty gritty of the XML and the programming that's involved with it, that would be our contractor, Thomson Reuters, and Rosanna is the contact.

Question: Was there a baseline description as to what the ideal State health care would be?

Ernie Moy: Again, because everything in the State Snapshots is defined in relative terms, I don't think that there is a standard absolute definition of optimal health care. Rather, what would constitute optimal health care for each measure in the State Snapshots product would be the State that performs the best for that measure, either having the highest rates of desirable care or the lowest rates of undesirable care, but it's a theoretical construct. No State even comes close to that.

Question: When a specific type of insurance is not mentioned should we assume that the data is a sample of all types of residents: Medicare, Medicaid, private, and the uninsured?

Ernie Moy: Yes, that would be the appropriate assumption. There are potentially some restrictions. Much of our survey information relates only to noninstitutionalized individuals, but besides that, it should cover all payers.

Question: Are the rates reported for the various measures an aggregate representation of those who have different types of coverage and are uninsured? How is data from the NCQA [National Committee for Quality Assurance] used for the Snapshots if at all?

Ernie Moy: I think the first question was similar to the last one, so the rates represent all payers unless it's specifically stated otherwise. In terms of NCQA data, some of the measures that you'll see there are HEDIS measures coming from the HMOs; those relate to patient centeredness and are measures of patient-provider communication, but we don't use any of the clinical HEDIS measures in the reports.

Question: Is there any plan to incorporate a rate of change calculation, for example, change over year, since there are no different base years?

Ernie Moy: We have not thought about doing that, let's put it that way. We find that rate of change tends to be very variable. Just learning from the national report, there is some bouncing around over time. We're actually, this year, moving to regression to try to establish rates of change, as opposed to just using the difference between 2 years. If you are really interested in this, you can write to us and encourage us to do it, and it might be something that we'll pick up. I am concerned about the variability that often exists from year to year and about quantifying that as a rate of change.
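[Editor's note: To illustrate the distinction Ernie draws between a simple two-year difference and a regression-based trend, here is a minimal sketch in Python. The years and rates below are hypothetical, and this is not AHRQ's actual calculation; it only shows why a slope fit to all available years is less sensitive to a single noisy year than an endpoint-to-endpoint difference.]

```python
# Minimal sketch: endpoint difference vs. least-squares trend.
# The years and rates are made-up illustration values, not AHRQ data.

def endpoint_change_per_year(rates):
    """Rate of change using only the first and last observations."""
    return (rates[-1] - rates[0]) / (len(rates) - 1)

def regression_slope(years, rates):
    """Ordinary least-squares slope across all years; damps single-year noise."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(rates) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, rates))
    denominator = sum((x - mean_x) ** 2 for x in years)
    return numerator / denominator

years = [2003, 2004, 2005, 2006, 2007]
rates = [68.0, 70.5, 69.0, 72.0, 69.5]   # a measure that "bounces around"

print(endpoint_change_per_year(rates))   # 0.375 per year, driven by the two endpoints
print(regression_slope(years, rates))    # 0.45 per year, estimated from all five years
```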

Question: Do you provide insights into how States should utilize the various State health care report cards such as The Commonwealth Fund, United Health Foundation, etc? How do States decide which is best and most applicable?

Ernie Moy: Well, I can put in my two cents, but maybe I'll defer to my colleagues from the States to see how they use it. From my perspective, more information is better, and as you know, each of the State report cards that exist has its strengths and weaknesses; there are certainly differences among them. I think when there are areas of agreement and we match up with them—and we see that there are typically more areas of agreement than disagreement—that gives the greatest motivation and greatest confidence for a State to invest in a particular area. That being said, I can understand that State policymakers do not like to see conflicting data, and I'm interested to see how New York or Utah deals with that issue.

Foster Gesten: This is Foster from New York. For better or for worse, I think there is a great amount of agreement between the two major report cards. I think the biggest area of disagreement we have is when we focus only on the managed care population: our numbers look a whole lot better. When trying to reconcile why it looks better in the NCQA numbers versus the other ones, again, the devil's in the details of exactly who is included in the numbers, what rates, what years, and so on. But our experience has been that in the areas that matter in a broad way, the arrows tend to be going in the same direction. We have not had the experience that was described earlier from Utah, where one report said we're wonderful and another one said that we were mediocre or challenged; both of them showed similar problem areas. I'm in Ernie's camp: more information and more data are generally better, and you look for those areas where there's great overlap and consistency.

Question: Do you have data on eye diseases such as cataracts, glaucoma, etc.?

Ernie Moy: I think the only measure that we have is the diabetic screening, the diabetic eye exam measure. That's the only related measure for which we have State information. In the Report itself we do track a couple of other measures related to vision, but we don't have State information for those.

Margie Shofer: I see that a couple of people have asked if we're going to have a recording of this, and the answer is "yes"; you can send an e-mail to the address that will be on the very last slide if you'd like to be notified when that's available. As you know, it takes us a little bit of time to put that up. Otherwise, we'll probably be sending out an e-mail letting folks know when it's up, generally through AHRQ's GovDelivery e-mail list, the list that was used to notify some of you about this Web conference. Also, for those of you who have been part of our ongoing technical assistance for the State Quality Tools project, we'll let you know through that list.

I think we are at the end of our questions, and the timing is perfect, because we are at the end of the Web conference. First of all I want to thank our speakers, who did a fabulous job: thanks, Ernie, Foster, and Keely. And I want to thank you, the audience, for your great questions and your participation in this Web conference. I think that we answered everybody's questions, but if for some reason we didn't get to your question, we promise we'll answer it over the next few days. We hope this discussion was helpful to you, and if you do have questions about follow-on technical assistance opportunities, please do not hesitate to submit them to the AHRQ quality tools e-mail address. You see the e-mail address on this slide here. If you have any questions or comments about the tool, please send an e-mail to the same address.

Also, after the event closes, you will see an evaluation, so if you have a few minutes, please fill that out. It's really important; it helps us know how this presentation helped you and helps us plan for future events. So, thanks again. This concludes the Web conference, and we look forward to hearing from you. Goodbye, everyone.
