Archive: U.S. Department of Health and Human Services
Archive: Agency for Healthcare Research and Quality www.ahrq.gov

This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work. Persons with disabilities having difficulty accessing this information should contact us at: https://info.ahrq.gov. Let us know the nature of the problem, the Web address of what you want, and your contact information.

Please go to www.ahrq.gov for current information.

July 23, 2009

Transcript: First Meeting of the Subcommittee on Quality Measures for Children in Medicaid and Children's Health Insurance Programs

Morning Session  

Jeffrey Schiff: I just want to start the day by saying that President Obama said this has to get done and if you read the small print, it says we need a core set of child health quality measures.

Female Voice: Did it say how many we need?

Rita Mangione-Smith: We will get there today.

Jeffrey Schiff: Congress will be discussing that. Anyway, welcome back everyone who has made it here for day 2. My name is Jeff Schiff. I'm one of the co-chairs, and I'm the Medicaid medical director in the State of Minnesota, and my co-chair is Rita Mangione-Smith. Rita, do you want to reintroduce yourself just quickly?

Rita Mangione-Smith: Hi. I'm Rita Mangione-Smith. I am a general pediatrician at the University of Washington, Department of Pediatrics, and I'm also a researcher at the Seattle Children's Hospital Research Institute. My area of research is quality of care assessment and improvement.

Jeffrey Schiff: We have one committee member who is new today. Alan, will you introduce yourself for a minute?

Alan Weil: Sure. Good morning. I'm Alan Weil. I'm the executive director of the National Academy for State Health Policy. Sorry to have missed your discussions yesterday, I was at another meeting. I'm happy to be here.

Jeffrey Schiff: Carroll, I do not think we got you to introduce yourself yesterday.

Carroll Carlson: That is okay. I'm Carroll Carlson. I represent Medicaid Health Plans of America, and I'm director for government programs in Wisconsin for Medicaid programs.

Jeffrey Schiff: Rita and I are going to spend just a few minutes recapping what we perceived yesterday to be about, so we hope our perceptions are yours, and then we will talk a little bit about the agenda for today.

Yesterday, I think it was important for us to have the presentations by Carolyn Clancy and Cindy Mann and Barbara Dailey that really set the context for what we are doing. I think everyone in this room feels some significant weight of the responsibility of this committee. And also, I think the collaborative effort between AHRQ and the Centers for Medicare & Medicaid Services (CMS) on this is significant. We talked yesterday about this framework for trying to decide on a core set, moving from validity to feasibility, with reliability included in there, and then moving to today, when we are going to talk about importance.

We wanted to get a little bit more clarity about measures in use because it was a topic that we spent some time talking about and clarifying, and I'm going to let Rita talk about that, but I think that is an important thing to clarify and then we will talk a little bit more about yesterday.

Rita Mangione-Smith: I think we talked about so many different sets of measures right now that we are dealing with that it got a little confusing for people around the table. So I just want to clarify again what our process moving forward is going to be, both with the measures you have already done the Delphi process on for the first round and the measures that were proposed up there, the measures in use not on our list, so the purple ones that were written down there.

First, I'll start with the set you already scored. Today, as we said, you will be scoring importance on the ones that have already clearly made the validity and feasibility cut from the first round of the Delphi. The ones we spent a lot of time having lively discussions about yesterday, you will also be rescoring for validity and feasibility. If those measures make the cut for validity and feasibility, we will then go on and do an importance Delphi on those. But that will, of course, be online when you are all back at your home institutions.

For these measures, we have three important requests for those who nominated them or told us about them, the measures that are in use but not on our list. There are some key pieces of information we need for each of those measures as we move forward with them. We need to know of any evidence that you are aware of that shows the link of that measure to outcomes, right? So kind of what we talked about yesterday in terms of the evidence base for the measures that were already on the list, much like Denise and Arielle went through and tried to collect as much information as they could on those measures. We are asking you, if you proposed a measure and you know of evidence, to please send it to Denise. So that is the first thing.

If you could please provide information about who is currently using the measures since those are not measures that we are aware of being used by the States. Some of them may be; in fact, I know for a fact that PedsQL is being used by States. But for those that are in use but not being used by Medicaid, if you could let us know the organizations using them and very importantly, if you have access to the specifications for those measures, please, also send those to Denise because, as you know, yesterday, we made one of our feasibility criteria that specifications need to be available for the measures that we are going to propose. Go ahead.

Denise Dougherty: Well, I just want to say would it be helpful for me to send out a list of the ones that are on there and with a reminder to say—

Rita Mangione-Smith: Right. And with the caveat also that if you do not have access to the information that I just said, please let Arielle and Denise know, and then they can do their same kind of scan that they did for the measures on the original list. Once we have that information compiled, those measures along with the ones we talked about yesterday will come out to you for the Delphi process. So it will be the first round for those measures, but we do not want to ask you to do it as two separate activities. We thought we would bundle them all together and let you Delphi through all of them at the same time.

So are there any questions? Is that a little bit clearer? I hope so. Okay.

Jeffrey Schiff: One of the things we have talked about as far as the Delphi for what we will do during the interval is we will do it in two stages. One will be for around the validity and feasibility for the newly nominated measures as well as for the ones we just talked about yesterday. Then, the second stage will be around the importance. But we will combine the new ones and the ones we discussed yesterday for that.

Rita Mangione-Smith: Yes, and if any of the new ones need to go through a discussion like where they are on the cusp, we will do that in September when we are all together again. Okay?

Female Voice: Just a quick question, will this be done in the next couple of weeks so that all of these other measures will be ranked and stuff by our September meeting?

Rita Mangione-Smith: Yes, exactly. That is the goal, before the next meeting.

Jeffrey Schiff: Yes. As Rita was talking, I realized that we will, I'm sure, in the next few days after this meeting try to set some hard deadlines around this, so it is unfair to the staff and to everybody else here to—we will not be able to take nominees for new measures on September 16th, so I think that will be important.

I want to turn to today a little bit. We have, I think, a very informative morning with speakers who will talk about the challenges of implementing quality measures in children's health. I think there is a ton of expertise in our audience, and then there are some folks who have been working on papers commissioned by AHRQ around key topics that have been identified for a variety of reasons. Unfortunately, each of these authors could give a whole-day seminar, I'm sure, and we would learn a lot, but we have needed to pare down their talks to a small amount of time so we have time for a discussion.

This afternoon, we are going to switch gears and talk about importance. We will start the same way we started with validity and feasibility, by talking about the criteria for importance. We are also going to insert some time in here to talk about the scope of the core set, because even though we had originally thought that was a discussion we could have next time, as we talked about it at dinner and through yesterday, I think there is a lot of interest in discussing what it means to be in the core set, in terms of what level things get measured at and a lot of different issues around scope. This will not be the only time we discuss this.

So we will talk about that, and then we will spend the rest of the afternoon talking about importance for the measures that have already scored high on feasibility and validity. After that discussion, we would like to get those score sheets back so that Rita and I have something to say to the National Advisory Committee tomorrow. Then we will spend a little bit of time talking about how to present it to the National Advisory Committee, and probably look to the two members of that committee who are on this subcommittee to help us with that. And then we will have public comment at the end of the day.

Female Voice: Actually, I think our public comments are [cross-talking].

Jeffrey Schiff: Oh, never mind. Sorry about that. Public comment will be this morning. So any questions about today? Okay, great. Thank you. Go ahead.

Rita Mangione-Smith: We are going to start out today with three speakers who are very knowledgeable about quality measure development and implementation. We asked them to come and speak to us as a group specifically about the challenges of implementing health care quality measures for children. We have three different speakers for today. We have, first, Helen Burstin from the National Quality Forum who will speak to us.

Helen Burstin: Good morning, everyone. It is a pleasure to be here with you today and always fun for me to come back to AHRQ meetings. Since I ran one of the centers at AHRQ for about 7 years, this is always a treat.

What I wanted to do this morning was to talk a little bit about implementation, but also give you a bit of the landscape on the NQF process, what NQF endorsement means vis-à-vis some of the criteria you are already working through, and then give you a sense of what is actually currently available within the NQF portfolio. For all those measures in the NQF portfolio, and this comes back to the point Lisa Simpson made yesterday, we can certainly provide you the specifications. In addition to that, we have all the measure submission forms, so we can actually go back and pull the evidence for you as well for any of those measures. So, again, if we can be helpful here, we would be delighted.

Briefly, I just want to mention the endorsement process: how we select the measures to be NQF-endorsed, and the current availability of NQF-endorsed measures for children, which I think is larger than many people realize. And then, looking a little bit more to the future, our plan is to actually increase the availability of children's measures, and I'll talk about three ways we are going to do that.

So first, I just want to spend a moment on how we evaluate and endorse measures through the National Quality Forum. All of our measures are evaluated in the exact same way. We recently updated these evaluation criteria, but there are four basic criteria that you will see mirror very much your discussions, at least, of what I was able to hear yesterday.

Importance to measure and report is the first. Actually, that is now a threshold criterion: if a measure does not pass importance to measure and report, we will not evaluate the other criteria. So it is quite important. Here we try to get at the level of evidence for the measures: is there a gap in performance? A lot of this really gets at issues around feasibility: is the juice worth the squeeze, essentially? Is it worth collecting these data for the sake of trying to improve quality? The second is the scientific acceptability of the measurement properties. This focuses very much on reliability and validity. The third is usability: can the intended audiences understand and use the results for decisionmaking? And then, lastly, feasibility, another issue you were talking about yesterday. Increasingly, as we are moving towards electronic data sources, whether electronic health records (EHRs) or a compilation of different kinds of data, can we capture these data without undue burden and do it in a way that makes sense?

We also try to, in addition to those criteria, pick the ones that are the best in class. This is becoming more and more of an issue as we get to a very large set of national data measures as I'll tell you in a moment.

We recently had one of our staff members go through all of our measures and pull them out by age group, because the title of a measure can be deceiving: children are often included in measures that do not have "child" or "kid" or "pediatric" in the title. We currently have 515 total endorsed measures, of which 55 are measures for children or adolescents. There is actually a fair number of measures for adolescents; many of the age-group cut-offs, for example for measures around sexual activity or STD screening, begin at about age 12 or 13, so many of those measures are available as well.

In terms of some of the challenges here, and I'll stay more on the measurement side, and certainly, my colleagues will talk more about the implementation challenges, we have had a very difficult time with defining children. As a mom, trust me, I have no trouble identifying those small rugrats who are very loud in my house, but in terms of measurement, this is really a challenge. People variably do this. We have actually gotten to the point now where we go back to measure developers when they submit an age cut-off and say: What is the logic of this? How do you justify the age of 5 for example for some measures? How do you justify 12 for measures of sexual activity? So this is something I think as you deliberate going forward, there should be some clarity. I think it will be very useful for us to work collaboratively on because it helps us ultimately harmonize the measures and get a reasonable set for children.

The other thing: for many of the crosscutting measures that could technically be applicable to children, what we find is that there is really no consideration, when the measure is developed, of whether or not it is appropriate for children. Medication reconciliation is an example; I cannot think of a single reason why our current measure is limited to ages over 65. It is a very logical measure that could, I think, very easily be re-specified. Go back to the measure developers; make the case that this measure works for children. Let's work through a process to retool those measures we know work and are feasible to get them done. And measure development for children is just at an earlier stage; there are not as many of the outcome measures or composite measures that I think we are hoping to see more of going forward.

I mainly just put this together; these are the current areas in which we have endorsed child health measures, and I'll just run through each of them quickly.

On the prevention side, there are many of the issues you were talking about yesterday: certainly immunization status, body mass index (BMI), and tobacco use, cessation, and prevention. A fair number of the measures are actually neonatal immunizations and perinatal measures, as well as an assortment of others, and I gave really just a handful here of measures for adolescents. Chlamydia screening in women actually goes down to age 13, so it is certainly appropriate for adolescents. Many HIV-related measures that we endorsed last year are also applicable to adolescents, around screening for high-risk sexual behaviors and injection drug use.

Experience of care measures, I know you have talked about these as well. We have already endorsed the pediatric care survey under clinician group CAHPS® [Consumer Assessment of Healthcare Providers & Systems], the specific CAHPS® survey for chronic care, the young adult health care survey, as well as the promoting healthy development survey.

We recently completed a perinatal care project, so we have a very large new set of 28 measures around perinatal and neonatal—particularly perinatal—care. Many of them are mom-related, but a significant number are also about the baby before discharge: things like prenatal screening; level of care; elective delivery prior to 39 weeks gestation; and exclusive breastfeeding during the birth hospitalization, which was recently endorsed. And then there is a series of neonatal measures—nosocomial bloodstream infections, neonatal intensive care unit (NICU) temperatures, screening for retinopathy, surfactant use—these kinds of issues.

On the outpatient side, there are not as many chronic disease measures as I think we would like, but part of that is because kids are healthy, thank goodness. There is a set of measures on asthma. You were deliberating this yesterday: there is one currently endorsed measure for diabetes, specifically measuring hemoglobin A1c. This actually came up in a recent project where people were looking at glycemic control, and the committee very much felt the evidence was not there yet to push definitively on some of these measures down to a lower age level without more consideration. There is a series of measures around attention deficit/hyperactivity disorder (ADHD) diagnosis, management, and followup, and around appropriate testing for children with pharyngitis and children with upper respiratory infections (URIs). Pediatric weight documented in kilograms is a new measure that came in recently through our emergency department project; we wanted to at least get consistency—really a safety issue—in the weights recorded in emergency departments (EDs). And a pregnancy test for female patients with abdominal pain, age 14 plus, is how one measure came in—a classic example of one of these adolescent kinds of measures. I'm not sure why 14 is necessarily the right age, but it is the one that was chosen for this measure.

On the inpatient side, there were many, many measures. And actually much of this is due to the good work of AHRQ in terms of the pediatric quality indicators. But certainly, measures around asthma, a whole series of indicators around health care-associated infections that specifically include NICUs and high-risk nurseries and a whole series of measures in the pediatric intensive care unit (PICU). I will also mention there is now a newly endorsed composite measure for pediatric safety that came out of the pediatric quality indicators. We can also stratify any of these by administrative data versus clinical data if that would be of use to the committee.

So where are we going? I'll just mention a couple of other challenges going forward. I think the overall evolution of the process is a drive toward higher levels of performance, rather than always measuring what is measurable but perhaps not as important; a shift towards composites: can you come up with a list of everything that should get done and measure that as excellence or perfect care? Increasingly, we are trying to measure disparities in all we do, to harmonize measures across sites and providers, and to promote shared accountability and measurement across patient-focused episodes, where outcomes become really prominent.

I just want to throw this up here. NQF has thought through a process for designating a series of measures as disparity-sensitive, so we do not necessarily need new measures for disparities. Can we instead take the measures we have and use a set of criteria, for example prevalence and impact, to designate them as ones that should always be stratified in the quality process?

And lastly, just to make the point, we are trying to think about how we can increase the availability of these measures, and I'll wrap up here. Specifically, how do we retool measures to be appropriate for children? We have been talking with various child health stakeholders and others, including the National Initiative for Children's Healthcare Quality (NICHQ), to help us think through whether we can look at that whole set of measures and identify which ones should be retooled, which can be done fairly quickly: change the age groups, specify them slightly differently, to get at the issues important to children.

We are launching a new consensus development project across 20 conditions in the next month or so specifically focused on outcomes, and we have gone back to HHS, and they have agreed to let us add child health outcomes as a separate part of the project as well. And then, lastly, the national priorities and goals specifically—Marlene sits on this committee for us—helping us think through how we would endorse measures related to the national priorities and goals.

The national priorities and the goals—many of these are not unique to adults by any means. They include coordination of care and population health. A population health index at a community level for example is a strong interest area. We already have a fair amount on safety. Engaging families about making decisions and managing their care, guaranteeing appropriate care for life-limiting illnesses, and the last one here about eliminating waste while ensuring the delivery of appropriate care.

So this is one of my favorite quotes, particularly in an area like this, "Not everything that counts can be counted, and not everything that can be counted counts." With that, I'll stop, and I guess we will take questions at the end?

Rita Mangione-Smith: At the end. Thanks so much, Helen.

Denise Dougherty: Actually, a change of plans on when we have questions. I think we will put the three presenters at the table to make it easier, and there will be discussion between the subcommittee and the speakers. We have eight authors; I'm not sure where we are going to put you, but we will figure it out.

Rita Mangione-Smith: Maybe we can put like eight chairs in there or something.

Our next speaker is Nikki Highsmith from the Center for Health Care Strategies.

Nikki Highsmith: Good morning. Thanks for the opportunity to speak with you today. I'm going to shift thinking a little bit from a measurement perspective and more from a State implementation perspective.

The Center for Health Care Strategies is a nonprofit organization located in Princeton, NJ, and we have 15 years of experience working with States around the country on quality, disparities, and chronic care. We have worked with almost all 50 States, a majority of the health plans serving the Medicaid population, and, increasingly, a significant number of high-volume, high-opportunity provider groups as well. We work through learning collaboratives, quality improvement collaboratives, innovation grants, and demonstration grants. We work very closely with the National Association of State Medicaid Directors (NASMD) around Medicaid leadership, in terms of new directors and in terms of leaders around the country that we are grooming in the Medicaid program. Most of our funding is philanthropic; we work with all the major health care philanthropies. We got started with a major grant from the Robert Wood Johnson Foundation in the mid-90s on managed care and whether managed care could afford us the opportunity to improve quality for this population. So we have spent 15 years helping States to use measures for the purposes of accountability, transparency, and improvement.

So I'll speak more from sort of a State implementation perspective as opposed to a measurement perspective. I know that Sarah will talk a little bit more about the work that the National Committee for Quality Assurance (NCQA) is doing within Medicaid. Right now, we have partnered with them recently on some work around childhood measures and so, I'll spend a little bit of time reflecting upon interviews that we have recently completed with State leaders around the country as well.

Everybody knows all of these large statistics, but I would just take a minute to think about the first two. Medicaid is now covering 67 million people; under any of the estimates in terms of health care reform, we are up to 90 or 100 million. We are spending $364 billion; give us a few years, and we will be spending more than Medicare. So Medicaid directors around the country view themselves as coverage organizations and as health insurance organizations, and they view quality of care and measuring quality as a responsibility that comes with the leadership they are providing. Within any State, they are probably one of the largest health care purchasers in that State. And so we see Medicaid as leaders around the country, and the Medicaid program is, to us, a land of opportunity to be able to show what we can do for low-income, elderly, and disabled patients in this country. We are privileged to be able to work with many of the very cutting-edge and leading States around the country that are pushing the edges of quality and measurement in terms of use for transparency, accountability, and improvement.

We do have a boot camp for NASMD for new Medicaid directors, and we talk about the quality continuum and where you can start and where you can advance to. So I will not spend a whole lot of time on this chart but, again, to show that Medicaid is a very active purchaser within States. They are not only measuring for the purposes of CMS reporting and purposes of holding our health plans accountable, but they are measuring and driving improvements in their States with quality improvement collaboratives and with performance improvement projects.

Increasingly so, we are finding Medicaid partnering with the private sector for the use of measures at the provider level, for public reporting and transparency, Jeff. In Minnesota, they have a large partnership with the Minnesota Community Measurement group, and we are seeing this public-private partnership occurring because, again, we are trying to create standardization in the field. So the NQF-endorsed measures become even more important as Medicaid begins to partner with the private sector.

I know I only have 8 minutes, so I'm going to try to do eight things around, again, sort of our experience of working with the States and our work recently with NCQA where we did pretty extensive interviews with the States around the country around child health measures. And, again, I know Sarah will talk a little bit more about this.

The first one is State readiness. We have been doing this for a long time, and I think the ability and the aptitude and the experience in the States is leagues ahead of where we were 5 or 8 years ago. We heard from our advisory panel and from the States that they are ready for additional Federal accountability. They are ready for more transparency and more accountability, and they are ready to help lead this in partnership with the Centers for Medicare & Medicaid Services (CMS).

But this is based on a long history of experience. I think, in some ways, the congressional mandate is sort of built on this wealth of experience that States have had over the last decade or two, particularly reporting for managed care. So you have State infrastructure that has built up over this period of time and understanding that some of that infrastructure may be constrained in fiscal times. Again, something that Ann mentioned several times yesterday, most of this experience is in the managed care program and is in the capitated delivery system.

But we do have a good number of primary care case management (PCCM) States out there—Oklahoma, North Carolina, Arkansas, Massachusetts—that are using Healthcare Effectiveness Data and Information Set (HEDIS) or HEDIS-like measures for quality measurement and improvement. So there is a good amount of experience in the PCCM programs to build upon. As Ann said yesterday, we do not have a lot of experience in the fee-for-service population, so that is where the opportunity to collect and disseminate measures will be important.

So theme number three, State need—we heard a lot in terms of looking to the Federal government and the Federal partners around identifying and coordinating national measures and having that broad stakeholder perspective at the national level, so I think CMS and AHRQ have a huge opportunity to work very, very closely with States moving forward around, again, looking at the measure specifications, how the measures are going to be collected, what sort of validation process and reporting process is going to come out of this. That is a huge amount of work, and it is a great opportunity to be in partnership and in collaboration with the States as opposed to sort of just telling them what will happen to them moving forward. So it is a great opportunity for AHRQ and CMS.

And think a little bit about what States want out of this. Again, I think the measure set is really for kind of Federal accountability. We have $367 billion, and we do not have any national measures within Medicaid to show what we are spending our dollars for. At the same time, States are hungry for more comparative information and so to the extent—and particularly for primary care case management (PCCM) States that may not report to NCQA via HEDIS—they are hungry for understanding how they compare amongst their colleagues and peers. Think a little bit about sort of what the State perspective is and how States would like to use the dataset moving forward as well.

One other point here before moving forward. I think one of the discussions that you had yesterday was really kind of at what level of the system are we trying to measure—is it at the provider level, is it at the health plan level, or is it at the State level? I think it sounds like you will have a little bit more conversation about that today in terms of what level of accountability we are trying to ensure. I think, again, particularly for PCCM or fee-for-service programs, they have not had that ability to show transparency and accountability at the State level, and so I think that is an important consideration.

State need—and you talked so much about measures yesterday and again today, so I'm not going to spend a lot of time on this because NCQA and NQF certainly have the list of measures that are currently being used and ones for future engagement. But from our interviews with States, these are sort of the most used, in the middle, and sort of the least used. Again, that reflects very much your conversation and the documents that have been provided by AHRQ in terms of what is being used in the field.

These are priority areas that came up in our conversations with States around new measures. Again, these are not measures; they are really areas. I can go through them in more detail if you want, but as we talked to States, this is what they said they want and need in the future. So as we are looking to 2013 and looking at the opportunity over the next couple of years for grants, these are the areas that were of high interest to the States. These are also high-need, high-cost populations for which the measure sets as we know them today do not necessarily reflect the breadth and the scope or the complexity of the Medicaid population. Particularly, children with mental health and developmental needs and co-occurring physical health and behavioral health conditions are very high on States' radar screens, both in terms of efficiency and cost control, but most importantly in terms of quality and quality improvement.

A few issues in terms of State concerns, and I'll spend a little bit of time here as we think about things. Again, States are very concerned about access and outcomes but also about cost. I know you had this conversation yesterday of kind of how do you put this in sort of the priority language. I already mentioned the challenges within the fee-for-service population and to some extent the PCCM population in terms of not having the history and the infrastructure within the States to have measure development and reporting as part of the operational expertise within State levels.

You talked a lot about burden yesterday. I will not spend a lot of time on that. But this is also an opportunity, and Barbara and I talked at lunch yesterday, and I know this is sort of on your radar screen and priority list for the next couple of years. But as we get more sophisticated in how we are thinking about measures, we have an opportunity to standardize that measurement across State reporting both in terms of Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) requirements. We heard so much yesterday about the wonderful CMS-416 report, the waiver requirements, the [indiscernible] requirements, and the managed care requirements. It is a great opportunity to create that standardization within CMS so that we are really focused on what we care about in terms of outcomes and of measures that are as consistent as possible across States.

I'll spend a little bit of time here in terms of opportunities. I'm actually going to spend more time reflecting on the conversation yesterday and on the path that we have all gone through over the last 20 years. I was actually at the Office of Management and Budget (OMB) when the first HEDIS measure set came through, and I thought, "What is this?" That was my first time looking at a HEDIS measure set, as a person right out of graduate school sitting at OMB and trying to take it through the approval process.

I was implementing and running the managed care program in Massachusetts when NASMD, the National Academy for State Health Policy (NASHP), and CMS got together on the first performance measurement workgroup across Medicaid and the State Children's Health Insurance Program (SCHIP), trying to come up with a national set of voluntary measures. That was about 10 years ago, I think. I was in Massachusetts when the first letter came—actually from Lee and from my boss, Chris Bowen [phonetic], at the time—around voluntary reporting for GPRA (Government Performance and Results Act of 1993): "Come on, States, let's report on a few GPRA measures." We were trying to do childhood immunizations and, I think, well-child visits, if I remember correctly, and again, it was a voluntary process.

Some of that—as Lee reflected this morning—has been incorporated into the CHIP measures, but a lot of it at the national level in terms of Medicaid obviously has not come to fruition yet. So you have a huge responsibility in terms of the measure set, but I would also say small is big in this area. We do not have the history of measures across delivery systems and across populations within Medicaid. There are only—Sarah will give the correct number—20 States that use HEDIS right now. Even if we pick a small number of measures across delivery systems and across populations, that is a really big deal for States. It will be a really big deal in terms of reporting and in terms of the accountability.

Jeff, we talked a little bit yesterday about how to stretch but how to keep Cindy's sort of task of being reasonable at the same time, and so I would urge consideration in terms of small is more, and that you can use a few measures in terms of stretch goals and stretch priorities as well. Thank you very much.


Page last reviewed October 2009
Internet Citation: July 23, 2009: Transcript: First Meeting of the Subcommittee on Quality Measures for Children in Medicaid and Children's Health Insurance Programs. October 2009. Agency for Healthcare Research and Quality, Rockville, MD. https://archive.ahrq.gov/policymakers/chipra/chipraarch/snac072209/sesstranscrk.html
