2012 Meeting of the Subcommittee on Quality Measures for Children's Healthcare

Transcript (continued)

Transcript of a meeting of the SNAC held September 12, 2012, in Bethesda, MD.

Overview of the CHIPRA Process and Timeline

Charles Irwin: Thanks so much, and now Charles will take over.

Denise Dougherty: Charles asked me to help with the interpretation of the CHIPRA timeline. The timeline begins in February 2009. That is when the CHIPRA legislation, Public Law 111-3, was passed with a wonderful, exciting Title IV, which was designed to support and direct activities that would improve health care quality and outcomes for children. One of the things it asked the Secretary of Health and Human Services (HHS) to do was to identify an initial core set of measures for voluntary use by Medicaid and CHIP programs and to post that set for public comment by the end of the year, basically. That is what Carolyn was referring to when she said a frenzy of activity occurred in about 6 months' time.

One of the good things was that the legislation did not ask us to develop new measures for the initial core set. It just said to identify measures that were evidence based and crossed a wide range of health care settings, providers, child ages, and types of services by the end of the year.

Thanks to Rita Mangione-Smith, who was the co-chair of the first SNAC, we actually did that, and we posted the evidence grades for the measures that were recommended by the SNAC and then recommended by the Secretary. That was all done. You will see that in December 2009, the initial core measurement set was posted.

But then, in its wisdom, Congress thought that those measures possibly would not be the best measures ever, because we were identifying them from what was already available, and not many resources had been devoted to the development of a really good set of child health quality measures in the preceding decades. It asked the Department to create the Pediatric Quality Measures Program by January 2011.

It also asked us to engage a very broad range of stakeholders and suggested that we use grants and contracts for that program. AHRQ worked in partnership with CMS, and under the leadership of CMS, seven cooperative agreements were awarded to the Centers of Excellence in Pediatric Quality Measurement, many of which introduced themselves here. Their charge was to develop, enhance, and improve measures, not only to improve the initial core set for voluntary use by Medicaid and CHIP programs, but also for other purposes and uses by other private and public programs. They are in the throes of that. They were informed in the Funding Opportunity Announcement [FOA] that they responded to that they would be given assignments on measure topics of importance to the Medicaid and CHIP programs and to child health. They now have about 42 assigned measure topics that they are working on. Some of those measures were actually submitted to the SNAC for review this year, and you will be hearing about them.

In September 2011, we had the first SNAC-2 meeting, which was actually called an expert meeting and did not include all the people that you see around the SNAC table here. That meeting was to agree upon, not develop, the desirable measure attributes for reviewing the submitted measures. In January of this year, we had a call for measures because, in addition to what the Centers of Excellence were doing, we wanted the public to nominate measures that could fill some of the blanks we were not able to fill in the initial core measure set. We got 77 entries in the public call for measures, and 50 of them had enough information that we thought the SNAC could review them according to the desirable measure attributes agreed to earlier.

The SNAC has been extremely busy over the summer reviewing measures in several stages using a modified Delphi approach. And here we are on September 12, at the first SNAC-2 meeting that is actually going to be recommending measures, both for the improved core set and for these other public and private programs.

What Carolyn referred to earlier was that we don't stop here. Unlike with the initial core set, where we only had about 6 months to do all this work, we actually have a couple more years. The SNAC will be doing this work again. You can all quit now if you want to. You had a tough year. We can improve the process, I am sure. They will be doing this again in 2013 and 2014, looking at publicly nominated measures and also other measures developed or enhanced by the Centers of Excellence.

There is information in your briefing book about what happens after a measure is recommended by the SNAC. And I think you are going to go into that. Then it goes to the NAC, then it goes to CMS, then it goes to the Secretary, and then it goes to the States, and technical specs are added. It is quite a complicated process, even after the SNAC makes its recommendations.

Charles Gallia: There are people you mentioned, like Rita and others, who have done a lot of the heavy lifting to get us where we are today in a very short period of time. They did some incredibly remarkable work to develop this initial core set of measures. And with that initial core set, States have taken it into the implementation phase. But I wanted to provide a little bit of context and understanding, so I am going to move to the background on Medicaid and CHIP 101, even though I know most people feel that they are familiar with Medicaid and CHIP programs, and you have probably all heard the statement that if you have seen one Medicaid program, you have seen one Medicaid program.

Within the context of the complexity of the delivery system that we have, there are really two programs that we are talking about in the country: CHIP and Medicaid. They are sometimes the same and sometimes different, depending on the State. In general, the combination of the two programs covers about 30 percent, or one in three, children across the country. In many States, it covers two out of three members of low-income populations. In certain areas, it is reaching 95 percent of the children within each State's income limits. Coverage has become that comprehensive through outreach efforts that have occurred over time. In terms of expenditures, children represent about 50 percent of Medicaid beneficiaries, but people say kids are cheap, and they account for only 21 percent of the cost of care. During the course of the year before last, I think it was, 45.3 million children were served, and that number is increasing over time.

There are two things that people can mean when they say Medicaid. Really, it is a reference to a particular section of the Social Security Act, Title XIX. Within that, this particular program has minimum expectations. For children under 6, coverage begins below 133 percent of the Federal poverty level. For children 6 to 18, it is below 100 percent, and for pregnant women, eligibility ranges from 100 to 300 percent of the Federal poverty level.

CHIP, on the other hand, is a program that captures the income band above that group. The eligibility ranges fluctuate, depending on the State, between 100 and 400 percent of the Federal poverty level. As I mentioned before, this program can be an expansion of an existing Medicaid program. In that case, there is no delineation between the programs in a State. If you walked into a Medicaid program or a medical assistance program, there would not be any distinction in the door you went through to obtain eligibility. The program staff are the same. The administrative data systems are the same. The criteria for enrollment are all consolidated.

Then there are other States that have stand-alone programs. In that particular case, they are administered differently. They have policies and rules that are driven separately. They have their own specifications in terms of eligibility. In some cases, even the applications are different or have different branding. You would not know that you are even affiliated with what might be considered a Medicaid program or part of the Social Security Act. And then there are the ones that are a little of both.

Next is the FPL [Federal poverty level], or the poverty-level variations for CHIP and Medicaid. But the point that I thought was probably most important is the variation in implementation. I will explain in a second why I think that is important and relevant. There is obviously some variation between stand-alone and expansion programs. There are not that many expansion programs. Many States have combination programs, and several have separate programs.

When you have a separate program, what it means is that if you are an administrator in a State agency and you want to obtain information from the Medicaid data, you would go to a different program office and ask their staff to supply you with the information. Being the administrator of a CHIP program does not mean you automatically have all that information at your disposal.

In addition, the rules and the reimbursement and cost kinds of equations that go into assessing your own program are going to depend on data that you may not have control over. That is what happens when you have a separate system: even though the purveyor of most of the administrative data is the claims processing system—the MMIS, or Medicaid Management Information System—you have to request that information. That makes implementation a little challenging. It makes prioritization of the policies and actions that your State or your program can take even more challenging. If you want to assess your own CHIP program, you are still reliant and dependent upon someone else to make that kind of assessment. That is part of the reason I wanted to provide this little framework on the variations in the implementation of the program.

It can also mean that what is actually produced and turned in, in terms of measurement, is going to vary too. When a State turns in data to CMS through CARTS, sometimes the data can reflect a small segment of the population, and sometimes a larger or more comprehensive look. When I, as a State, want to know how I am doing by comparing myself to others, this is what I am comparing myself to. It raises some questions about how we are doing relative to other States, because the comparison may reflect all of some States or only parts of some States. It is one of the challenges we have in thinking through measurement and implementation.

Not only are the CHIP and Medicaid programs different at the State level; within each State, the delivery systems are different. The reason this is relevant is that reimbursement is one of the connecting pieces of how you monitor and assess quality. In a fee-for-service environment, obviously you are getting paid one service at a time, and there is not necessarily the quality improvement infrastructure that you would find with a managed care organization or primary care case management. Some States have a very solid understanding of who is seeing which child on which day and which child has missed immunizations. They have some outstanding programs and registries that really do an excellent job of monitoring the quality of care provided.

And managed care by itself also varies. Sometimes those programs are commercial plans in combination with Medicaid and Medicare; sometimes they are Medicaid-only programs. The character of each of those managed care organizations is also going to be significantly different, even though I have managed care listed as 63 percent. But that figure is kind of aspirational, in the sense that the organizations, as they mature, develop an organizational capacity to provide feedback to their providers and provider networks and to do their own self-assessment. In one sense, that larger 63 percent section moves us into a systems way of thinking, as opposed to fee-for-service, which is one service at a time.

The information that we put together came out of discussions and descriptions of ideas to make sure that people understood the landscape of implementation. Consider the core measures that we are working with—not just the 24, but, as Marsha pointed out earlier, also the 26 adult core measures that we have and are considering implementing simultaneously. There is some overlap in those subjects, but we thought it was important for you to see that it is not simply a matter of selecting a measure because it is important by itself. That is not to say we are diminishing the value of the subject matter. But each measure has to be considered in relationship to the existing core measures. That is why I wanted to ask Karen if she could walk us through this.

Karen Llanos: I think Charles did a great job of outlining the issue of incremental changes relative to the existing 24 measures. We and the States have invested a lot—I am sitting next to Kim, who is from the State Medicaid agency of Arizona. It has been 2 years since we released the initial core set of 24 measures. It is a voluntary program; I am not sure if we said that before. There are no incentive payments tied to it. Some of the measures in the core set of 24 are complicated measures, and we know and have heard from States that even though it is a voluntary program, they want to participate so that they can start getting the national picture and start seeing what their performance rates are—how they can use the measures to drive quality improvement. Even though the data that they are submitting are not perfect, they are certainly trying, because they see the advantage of submitting data to CMS for a greater cause, I guess, for lack of a better word.

We want the measures to be used. We have been working over the past 2 years with our technical assistance team to help States understand what the measures are and what they mean, because we do not see them as a check-off box. We want these to be living, breathing measures that are value-added for a State Medicaid and CHIP agency. And I think the only way we can do that, as Marsha mentioned, is to tie them to quality improvement so that there is actual value.

It is not just about telling CMS on an annual basis how many well-child visits you have had; while that is helpful, that is not where we want to go. We want to make sure that the States understand that, so that we can understand how one State is providing health care for adolescents while another State may not be, and tie them together to understand how we can improve health care overall.

We want to make sure that as you are thinking about improvements to the initial core set of 24 measures, you are making them relative to what we already have there. We know that reporting from States has increased on an annual basis for the past 2 years. We know there is interest. We also know that building long-term, long-standing quality improvement takes time. For States that are investing in and programming the initial 24 measures, making a complete change by adding a whole different set of measures might not be the easiest thing to do, because adding measures means money, and it takes time.

I think we have mentioned this before on previous SNAC calls, but it is a voluntary program. States submit data to CMS on an annual basis through a very cumbersome reporting template that we are hoping to streamline. But we know that even just plugging in the data—not just collecting it at the State level, but just plugging it in—takes time as well. We just want to make sure we are laying those factors out for you.

And then I guess the last piece is for those of you who do not have the 24 measures memorized as I do: they are listed in your briefing book as well. We started with a list of measures that covers prevention, acute care, and chronic conditions, but in 2009, when the core measures were identified, we did know that there were certain gap areas. That is really where we are trying to figure out what incremental changes we can make over the next couple of years to make sure that the improved set of core measures that we use moving forward represents a balanced portfolio of measures that are feasible for States and reflect different aspects of children's health care quality.

Denise Dougherty: This actually is an important point for the SNAC. I mentioned before that the legislation asks for the development of evidence-based consensus measures for use by other public and private programs as well. And optimally, you would align all the Medicaid and CHIP measures with the private-sector and other public program measures. You have that option. If you really love a measure but think it is just too much for the Medicaid and CHIP programs, by whatever metric you are using, the recommendation can be that the highly recommended, highly scored measure be recommended for use by other public and private programs.

The other option is for those measures that do not get a high score or are not highly recommended by the SNAC—as I mentioned, we have 2 more years. All of those measures would be returned to the submitters. They can provide additional information, which will be much easier this time because they will have an online template in which to do it, which they did not have the first time. If information was missing, it may not have been their fault; they just did not know that we were specifically looking for that information. We could not provide the template the first time because of OMB rules about the Paperwork Reduction Act. Now they will have this online template that will give them more guidance on what information we are looking for. If they do resubmit, we can reconsider those measures in 2013 or 2014, whenever they come back. There are several options for looking at the different measures, several buckets in which to put the measures that are highly scored.

Karen Llanos: I think the way Charles and I were thinking about this is the impact of the core measure changes to CMS and to the States. And this is just to give you a sense of how performance has been. I am using the term performance a little bit loosely. As I have mentioned, we have had 2 years of reporting for these core measures.

This is a preview of this year's data, which we will be releasing in a Secretary's report in about 2 weeks—fingers crossed, Marsha. Forty-seven States and the District of Columbia voluntarily reported one or more measures. This year, the range of measures reported by States was actually a little bit higher than before. There is one measure that I think we have had consistent issues with in terms of the data source. The median has increased overall from 7 measures in the first year to 12 measures this past year. Again, you see more States reporting through the work we have been doing directly with them in the technical assistance program, more clarification, the resource manual that we release every year, and an easier-to-understand reporting template that we continue to improve—all of those factors combined. Plus, I am sure, the lessons States have learned in collecting these measures are leading more States to take up more measures within the core set. Forty-five States and the District of Columbia reported five or more measures. Thirty-two States and the District of Columbia reported 10 or more measures. Again, we see more States collecting more measures. No one collected fewer measures than they did the year before.

The third bullet—32 States and the District of Columbia reported Medicaid and CHIP data for at least one measure—really speaks to the point that Charles brought up before. In States that have separate Medicaid and CHIP programs, it is really difficult to combine the data. And we do know that some States are working to report both sets of data in one measure rather than have the CHIP program submit the same measure using a different data set.

We see more measures being reported by States. And as I mentioned, there are no measures for which States decreased reporting. We know that States like the measures more than they did before. They understand how to report and collect them. We can speculate that they will continue to collect more and more of the measures in the following years.

Next, the impact of changes to the initial core set for CMS. This is just to give you a sense of how the core set is tied to other activities that are required by the CHIPRA legislation. One of our biggest goals is not just to increase the number of States reporting each of the measures but also the completeness of the data, which Marsha alluded to before. We ask States to provide CMS with one aggregated State rate, and we know States define that in different ways because there are different delivery systems. Sometimes we will get two health plans' worth of data for one State. We are hoping to get more complete data that represent what children's health care quality means for all of the children within a program.

We are trying to implement, to the best of our ability, the use of the core measures in other programs across CMS. Sometimes that means waiver programs and demonstrations, or State plan amendments. We are really trying not to have the measures be just this one reporting mechanism; we are tying them to the broader issue of quality improvement. They are becoming part of the work that we do on an ongoing basis with States outside of regular reporting. Those are our short-term goals.

Our longer-term goal is to really use the core set to drive quality improvement and to feed that information back to States. We have an annual public report that we have to put out, which is called the Secretary's report. We started a couple of years ago with very little data to put up there publicly, and every year we are proud to say there are more and better data to put in it. We want to use measures to monitor and improve performance over time and to demonstrate progress in key areas—not for us, but for the public and for Congress—to show that the investments they are making in identifying the initial core set of measures are really paying off and that States are using them.

And lastly, measures that support nationwide quality improvement. We have other, larger national initiatives focused on quality improvement, such as the Strong Start Initiative, which is focused on maternal and infant health. We have about two measures within the children's core set and three within the adult core set that support that program, and we want those to stay, ideally. There is also patient safety, through the Partnership for Patients, which ties in Federal readmissions and oral health initiatives as well. We have two oral health measures in the core set that support that program. As I mentioned, the 24 measures are these living, breathing, value-added elements that we hope to be able to continue to build upon.

And as I mentioned before, they are voluntary for States. We feel like we have been building partnerships with States and building trust by not pulling the rug out from under them and switching out all 24 measures every year, because that is not how you drive quality improvement and get trending data.

And then the second bullet here is just to show you the impact of adding or removing measures from the core set, which needs to be done thoughtfully, in a way that is respectful of the investment that States have made. It requires resource-intensive changes to our reporting template. We have to submit changes every year, which is difficult to do, but we will do it. The changes also need to be reflected in the technical specifications manual that we release every year. We want to make sure that the documents we release to support States in collecting the measures are accurate.

And lastly, we offer technical assistance and analytic support for all 24 measures. We just have to be mindful that that is a resource-limited program; it might not be able to absorb a large number of new measures at a time.

Just lastly, the impact for States. As I have mentioned, they have spent 2 years learning how to collect and report these measures. A few of us have mentioned that the adult core measures are coming, and while those are also voluntary, we have already gotten feedback from States saying, we already invested in the 24 measures, and you have 26 more measures for adults coming up—can you give us a chance to breathe for a second, even though they are voluntary? I think they just want to make sure that things happen at a good pace.

And the second bullet really speaks to the fact that a lot of States, particularly if they have a managed care program, have multi-year quality improvement projects or performance improvement projects tied to some of these core measures that they are required to do for their managed care programs. That makes it a little bit harder to change direction, if that is where we want to go.

I think the biggest impact for States is in managed care. Managed care contracts usually have to specify the types of measures to be collected on an annual basis. State quality strategies are often tied to performance measurement, and so is the external review of quality under those contracts.

And then we mentioned the time and the resources that States use. We do not want to sound like a broken record, but this is the feedback we get from States whenever we mention that we are working on improvements to the core set. We just want to make sure that we are communicating this information to you as well, so that we all understand the different factors that are in play.

Charles Gallia: The way that I think of it, I guess, is that there was an IOM [Institute of Medicine] report about measuring what matters. I see some heads nodding. I know it is a simplified way of looking at things. We have talked about the initial core set and then the improved core set, and what I prefer to say is that what we have is an improving core set, as opposed to a definitive end point where we know we have the final set. Over the course of time, changes in technology and understanding and the work of the COEs will be embedded into that set. We know that it is going to improve over time, and that is pretty much where we are today. There was a considerable amount of information discussed at this point. Charlie suggested that if there are questions that people have, or observations that you would like to make at this point, I would open it up to anyone.

Clint Koenig: I have heard a lot of different talk about gaps, and I think there have been some allusions to burdens and implementation. I am wondering if we could perhaps hear what some of the roadblocks or gaps to successful implementation are, and what some of the other work is that needs to be done, to help me at least grasp the definitions of aspirational and grounded for this conversation today.

Karen Llanos: I think implementation and feasibility are some of the biggest ones. I am looking to make sure the State people agree with me, since they are the ultimate implementers of these measures. A measure that is "ready for prime time" is the term we use a lot. If a measure does not look understandable or feasible for a Medicaid or CHIP program, it probably will not be one of the measures that we can successfully have States implement. While we do not touch the technical specifications, we include them in a resource manual, and we do provide guidance for States on how to collect the measure. The biggest piece, and the struggle States ask us about, is that they do not have the codes available, they do not have the data, or they need to get the data from someone who is not within the Medicaid agency. Certainly, feasibility and implementation have been some of the biggest issues.

Marsha Lillie-Blanton: Let me just give one more example about feasibility. Two of the measures in the current children's core set require a linkage of birth certificate data with Medicaid records. We are absolutely, fully committed to those measures; they are measures that we are using to drive one of our major quality improvement initiatives, and we are now putting in place a system to better train and develop States' skills and capacity in linking. But that is an example of a feasibility issue: when a measure has to be collected and the method for collecting it is not clearly within the States' skill set, or will take additional work and additional resources, then even though the measure might matter, even though the measure might be important, I think we have to question or look at whether there is an alternative measure that might be better.

Kim Elliott: And if you are looking at it from a State perspective, there are a lot of feasibility considerations as well. The data sources are our biggest concern—the reliability of data sources and the ability to get the data. A lot of the measure sets being considered do not have standardized data sources or collection, which really makes it challenging for States, as does the cost impact when you have not only children but also adults, long-term care, and multiple different programs. You need to look for the measures that are most meaningful, that have the most reliable and accurate data sources, and that will actually provide an opportunity to improve outcomes.

From a State perspective, we do not want to put in our top priority the things that won't really make a difference in care and outcomes for the kids or for the adults that we are serving. We want to be able to make a difference in what we do. We look for the things that have the best data sources. If we are looking at things that we have to combine with public health data, like vital records, we may have to have the States change statutes to be able to actually have access to those data. Those are all big barriers for us.

Clint Koenig: Could you elaborate on what a best data source is for a State, and what a difficult data source would be?

Kim Elliott: Things that are really easy for States are administrative data, such as encounters and claims. In Arizona, our Medicaid and CHIP programs, even though they are separate, are in one house, so the data source is not necessarily challenging to get at. Those are easy. Consumer survey information is a lot more expensive and costly for our programs to get at, and a lot less consistent.

EHRs [electronic health records] are coming along, but they are also very challenging to get information from because uptake on the provider side has been very slow. And even when you get the information, it is still not easy to extract the data into the Medicaid programs. With health insurance exchanges, when those come alive, there is still the barrier of whether we will have access to the information or not, because it is personal health information, and a lot of the people in those systems are not strictly Medicaid or CHIP members. There are a lot of legal challenges and barriers to that collection effort as well.

Glenn Flores: I had a couple of process questions and observations that I think would be helpful to go over.

Charles Gallia: I wanted to actually add to the question that you posed. I can produce the measures; that is not really the issue, although it is complicated and there are challenges and questions. I really want to know that what I am doing adds value. I also want to know that the information I am providing is not only important for public policy considerations but resonates with providers—that by producing this information, I am addressing a need identified by providers themselves. I want to know that they will use it as much as possible, and that the information we produce as a State will help inform some of the day-to-day activities of running a practice.

And if it does not quite get there, then I have some concerns about the amount of time and energy being put into place, because that is really where it matters. And if it does not connect on the ground like that, and I know that for certain, then I am a little hesitant to restructure a data system to supply something on an ongoing basis unless it does click and does have that utility. It is a bit of a paradox, in a sense, that there is feasibility and utility simultaneously: unless you measure something, maybe you do not know you need to measure it, so it is not important. But part of it is stakeholder engagement that goes from the ground up.

Marsha Lillie-Blanton: Can I ask you to hold one more time? I am sorry. I just need to add a caveat to Charles' statement. He says feasibility is not a big issue for Oregon, but we need to note—though Karen did not say it in her presentation—that Oregon was the only State that submitted information on all 24 measures. His perspective is valuable. But understand he represents a very small universe. Out of 50 States and the District of Columbia, we had only one State. And last year we had only one State that submitted 18 measures. I want to give you that little nuance.

Glenn Flores: I want to start off by saying I am very excited about this initiative, because we are charged with a tremendous mission: to basically figure out how we measure quality for two of the most important programs for the health care of children. That having been said—and I think some of this has come up on some of our SNAC calls—what do we do at the end of the day, in 3 years, when we look at everything that we are looking at? Are we going to cover all the really important domains, and if not, are there domains that we should be looking at?

One process thought and question would be: are we periodically going to take a step back, look at the measures, and say, do we have eight measures on catheter-associated infections and no measures on medical home, and therefore we are really not addressing that overall mission? That is, if you had to come up with a set number of measures, or if you have no limits, what set of domains would you want to make sure you capture? I would appreciate a little more insight into that.

And the other issue is the quality improvement process. A number of us—I guess we are called SNACers; I do not know what else we would call ourselves—have noticed that we get a great diversity in the kind of information that comes with each proposed measure. Sometimes we get almost a textbook, which can be helpful or sometimes too much. And sometimes we get gaps in some of the information we would like to see. Is there a way to roll that back to the measure developers, to say it would be helpful to have these types of information, you might not have targeted the things we wanted to hear, or here is what a good measure would look like? That way, as we move along, we actually do some rapid-cycle improvements so that we can do this really quickly and efficiently. I will leave it there.

Alan Spitzer: Could I just follow up on that for a second, because I have a comment. I was going to make it later in the day, but maybe it is more appropriate here, given some of the discussion so far. In looking through the results of our initial ranking efforts, it seemed very clear to me—and this is a followup to what Charles mentioned a moment ago—that members of this committee are coming to these measures from a variety of different perspectives. I am looking at it from the standpoint of what the pediatrician, the nurse practitioner, the subspecialist in his or her office is going to do with a measure, because until that person has something in front of them that they can easily do, take care of, perform, and provide information on, it is not going to make its way in any kind of validated way to the State. States do not do quality improvement. Physicians and practitioners do it at the bedside, taking care of patients. And while States can affect payment to those providers, I think they really cannot do quality improvement.

It seemed to me, in looking at the rankings that were developed by the committee, that some people are approaching the measures from a State perspective: how nice would it be to have this information at a statewide level? And others of us, I think, were looking at it from how easy or how important it would be for the practitioner at the bedside to use this particular measure. And that is why I think we are getting this very broad diversity of opinion about a number of the measures: some of us are not putting ourselves in all of the potential perspectives from which these measures are going to be used.

I would be curious to know, when we are ranking measures, how AHRQ and CMS think about these things. Are they thinking about it in terms of just providing information at a statewide level, or, as Charles pointed out, is there really an urgency to get this down to the provider level, where I think there has to be? Unless the providers buy into these measures on a day-to-day basis and can provide accurate, validated information, then anything that is collected at the State level—and I know this from working with several hundred hospitals in our own company—unless you are absolutely certain about the source and provision of the data, any conclusions you draw are going to be completely erroneous, and I do not want to see that happen. I am just curious about the perspective here.

Denise Dougherty: Since I am the one probably most familiar with all the machinations here, I am going to take Glenn's first: how do we know when we have filled all the categories with the important kinds of measures that we need? That is the reason why we have consistently tried to organize the measures for your voting by CHIPRA categories, so you could always have that kind of in the back of your minds. I know it has been very difficult because we have not met in person to go through all of that, but that was our thinking. There is a chart in your briefing book, in section 5 I think, that organizes all the nominated measures by CHIPRA category. It also shows in the column to the left the current initial core set, all 24 measures by category, and then it provides the measure topic assignments to the COEs to show you what is coming in the next couple of years, as well as the adult core measure set. When you are thinking about a Medicaid program, you need to know what they are going to be responsible for collecting for adults as well. It is still a difficult charge to keep everything balanced and make sure you fill in all the gaps.

The other thing on the process is—I do not think you were able to be here a year ago, Glenn—one of the consensus decisions, and we can always go back and revisit it, was that we were going to have these desirable measure attributes, but the SNAC was not going to be very specific about what people had to provide on each of them. It would be more of an open-ended process, letting people submit what they felt would be convincing to the SNAC on those desirable measure attributes.

The other thing is, for the publicly nominated measures—the first ones you looked at—the public by and large did not have access to the template unless they called us and asked for it. They did not fill in that template, which is why we provided you with a copy of what they actually submitted, which was not in the same order as what we call the CPCF [CHIPRA Pediatric Quality Measures Program Candidate Measure Submission Form]. That is why there was missing information. And that is why one of the options today is to let people fill in that template next year or the following year, when they will know more specifically what you all were looking for. I hope that helps.

Alan, I know this is a bureaucratic answer. But the legislation calls for—and don't everybody laugh—every measure to be available at the State, plan, and provider levels, at a minimum. If everybody had electronic health records and everything was in bits and bytes, you could probably take it from the provider level and just roll it up to the plan level and then to the State level. That is not possible now. You are still faced with the fact that, by the terms of the legislation, you should not recommend any measure unless it can be aggregated at all three of those levels at a minimum. There is no way that is going to happen right now, because no measure can be aggregated that way. You still have the challenge of whether, even though a State may have more difficulty trying to aggregate a provider-designated measure up to the State level, it is still an important measure that should be in there. What CMS is looking for is reports from States for the report to Congress. But the States, of course, are looking for what is going on in quality at the plan and provider level. I just wanted to bring in that legislative directive.

Marsha Lillie-Blanton: Could I respond to his question also? I firmly agree with you that the measures we identify have to matter and be meaningful to both providers and the States. If we have any measure in our current core set, or any measure that we propose, that is not meaningful in the sense that a provider, either at the individual level or the plan level, views it as something they can influence in some way by their practice, then I would say we have failed. That is a fairly strong statement. But I think our intent is to make sure that these are measures that we can intervene on and that others care about.

Alan Spitzer: But I think there are two parts to that process. I looked at these measures, and quite frankly, if you just looked at them from a meaningfulness perspective, the large majority were meaningful. Some of them especially were put together very well; people spent a lot of time proposing the measures, and they were well thought out. If those data could be collected, it would be very meaningful. But there is also a practicality issue at the provider level that I took into account, and I am not sure everybody else on the committee did. That is why I am trying to look for some kind of context here in approving these measures, because I think that many of them, while meaningful, are simply totally impractical at a provider level. And that is a distinction that I think needs to be kept in mind.

Naihua Duan: I would like to follow up on the nice point that Charles made earlier, and that Marsha and several other members also made, about making sure that the measures we are proposing really make an impact with the providers and the stakeholders. I am wondering whether Marsha and Denise can share with us whether there are thoughts or plans at AHRQ and CMS to do some evaluation or followup monitoring on how the CHIPRA program works out in the field and what it is actually being used for.

I guess the point that Charles made was an excellent one. We do not want this to be an academic exercise; the measures are not just for their own sake but are the means to a larger goal. And we are making our best efforts to accomplish that. But once the program rolls out, I think there will be opportunities to find out from the field whether it is actually achieving what we are hoping for and maybe to make some adjustments.

Kim Elliott: Can I just respond to what the gentleman at the end said?

Diane Rowley: I think what you brought up is very important, but I wanted to close out the discussion we were just having about what is important to providers and what is important to States. I think there will be measures that are more important to providers than to the States.

Now I am going to put on my other hat, health disparities. In order to assess successful reduction of health disparities, we have to have population-level data that providers do not necessarily see as important—ways of measuring improvements, let's say a reduction in asthma or in preterm birth, that really cannot be evaluated just at the provider level. I would like to make a plug for continuing to try to have this balance of provider needs, State responsibilities, and meeting general Federal Healthy People 2020 goals. I think all of those things need to be considered when we are deciding which measures to use.

Feliciano Yu: I just wanted to piggyback on Alan's and Diane's point that perspective is important as we review these measures. Denise said that the legislation specifies the State, payer, and provider levels. Maybe at some point when we are reviewing these, we should put ourselves in each of those perspectives, so that each measure is looked at from every angle: would this be a good payer measure, would this be a good State measure, and would this be a good provider measure? And would there be opportunities in the future to actually align those three perspectives? Look at the supply chain of actors within the improvement process: if the provider does well, does the payer have incentives to perform, and does the State? Maybe at some point in the future we will have more robust measures that all align, but right now I think we are getting them from all different angles. We have themes for child health, ambulatory care, and hospital-based care, but maybe another cross-cutting theme would be the role or perspective.

Elizabeth Anderson: I had a question about how we can think about children with special health care needs and the definition being used, because we certainly saw several definitions across the measures. The Maternal and Child Health Bureau's child health screener was one definition used for a number of measures. Other submitters used the parent list of 20 specific conditions. Another submitter used a set of ICD-9 [International Classification of Diseases, Ninth Revision] codes from a series of articles, and the submitter listed the articles. And I think somebody else might have used an NIH [National Institutes of Health] definition. I found it distressing that we still have not come to a definition that we can agree on.

Denise Dougherty: One of the charges, one of the assignments to the Centers of Excellence is to develop a more usable, valid measure of children with special health care needs. I would invite you to talk to Rita Mangione-Smith during a break or some time. We cannot really get into it right now. We are working on it.

Charles Gallia: What I saw happen is that some concerns, positions, and criteria started to emerge about how we think about these measures and the vantage point. Thank you for the perspective that you used in assessing them, and for the concern about understanding where there are gaps—even defining what a gap would mean and how we go about doing that—and some of the processes we have to be responsible for this. This is emerging, and we are defining some of the concepts as we go. When I said improving, what constitutes improving? Or if you say to strengthen a measure, what is a strong measure? What is a weak measure? Does it necessarily mean that it increases in precision? Is a strong measure one that covers the levels of aggregation? We have heard a number of perspectives on the parts of delivery systems and on who is responsible or accountable.

But then there are also consumers, who would be just as much a part of being valued in the decisionmaking. Do these particular measures resonate not only with providers and States, but do they matter to consumers and patients, and how do we know that? For a lot of these, I have to say, I do not know the answer. I do not think we know the answer, and we will not for some time. We are going to have to do a little bit of Charles Lindblom's science of muddling through.

I hope you can bear with us, and me, as we navigate through this. I appreciate, again, the conscientiousness and thoughtfulness that you put into making these assessments. It is probably the single greatest value you could add to what we end up with in improving this core set.

Kim Elliott: The only thing I wanted to say is that what really matters in this process is that we are improving outcomes for the populations we serve. The doctors, the physicians, the providers are one aspect of it. The outcomes we measure from a State or population perspective are going to reflect the care and services provided by the individual providers. I think we cannot lose the focus that what we are really looking for is where to put our resources to improve quality for the largest number of our members and for larger populations. We can narrow down as we go forward. But right now, we really need to look at what the most critical factors are for the people we are serving; the providers, the State, and CMS will all benefit when we look at outcomes at the consumer level.

Denise Dougherty: Just going back again to the other CHIPRA purpose of recommending measures for other public and private programs, a reminder, to put that in context, that private insurance provides coverage for a little over half of children. Private insurance is the mechanism for paying for a lot of children's health care. The blue in the pie there is the private payment share of children's health care spending, in 2008 I think it was; this was from our MEPS [Medical Expenditure Panel Survey] data. And per capita expenditures, for many reasons that we are all familiar with, are higher for privately insured children. There are also examples of where privately insured children do not have great quality—many examples. This is just one of them: for having your vision checked by a health provider, the public sector actually did a little bit better than the private sector.

Charlie and I were actually discussing this last night at dinner: well, why should we care about these privately insured kids, in this context, when we have these very vulnerable children and we have CMS paying for all of this, and so forth? One reason to recommend measures for privately insured children and other public programs might be that those programs can then be test cases for some of these measures, just the way the Medicaid and CHIP programs are the test cases for the measures in the core measure sets. But what we would need—the legislation says you should develop and enhance measures for these other purposes and payers and so forth and disseminate them. That is all it says. We could actually use some help from all of you on how to do more proactive dissemination and actually get the private sector and other public programs to adopt some of these recommended measures and provide the information and so forth. That is just something to think about.

The bottom line is all kids are important, and measures should align across public and private programs.

Charles Irwin: At this point, I thought we would take a 5-minute break to stretch and then we will come back and start the real fun process. Do not leave.


Page last reviewed March 2013
Internet Citation: 2012 Meeting of the Subcommittee on Quality Measures for Children's Healthcare: Transcript (continued). March 2013. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/policymakers/chipra/chipraarch/snac0912/snac0912-transcript2.html