This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work. Persons with disabilities having difficulty accessing this information should contact us at: https://info.ahrq.gov. Let us know the nature of the problem, the Web address of what you want, and your contact information.

Please go to www.ahrq.gov for current information.

July 23, 2009 (continued)

Transcript: First Meeting of the Subcommittee on Quality Measures for Children in Medicaid and Children's Health Insurance Programs

Afternoon Session 

Rita Mangione-Smith: All right, everybody. We, along with our assistants—thank you very much, assistants, by the way—have tried to condense all of your great ideas, and we did a pretty good job. I think we have about half as many up there, maybe a little fewer than half. We ended up with 10 areas that people can vote on as being the most important things for us to focus on for scope. We'd like to get some group consensus around this, so you've been given five sticky notes. If you want to put all five of them on one, that's fine, or you can do two on one, three on another, however you want to split up your votes, okay?

So when you're ready, I will go through and read them all for everyone so you can kind of be thinking ahead of time before you go up there, okay?

So our 10 are: number one, effort to find good measures in all service categories and coverage eligibility categories that are in the legislation. So that's a mouthful, but we're trying to get—this is sort of the inpatient, outpatient, mental health, dental. This is the discontinuity of care, duration of enrollments, as well as all other things in the legislation; we'll make an attempt, a good effort, to find measures in all of those areas. However, if no good measures exist, it is out of scope, okay?

This is Marina's idea: leaning toward more grounded measures, grounded on one end, aspirational on the other. Column one—10 to 25 measures [inaudible], we don't have to stick to that number—is now feasible, maybe already in place. The middle column is kind of a signal to the States that this is what's up and coming: good specifications exist, and some States are probably already using these measures, the stretch kind of measures. And then aspirational measures needed to fill in the gaps.

Number three: integrate race, ethnicity, and language into the measures selected for the core set. Cost-effectiveness of quality improvement (QI) efforts resulting from measurement should be considered. Don't waste time on unnecessary data collection on measures without a demonstrated link to improved care or access; must be realistic about staffing, funding, and needs for pooling, analyzing, and reporting available data. Choose measures that are actionable; for example, if you have a rate on a measure that is suboptimal, what do you do to improve it? There is something you can do to improve it. Choose a narrower set linked to outcomes, so it's kind of a call for high validity.

Include measures not currently used by Medicaid/CHIP. For example, State and national measurement efforts that are already ongoing that Medicaid and CHIP could potentially link to. And then, actionable is not necessary: if the measure of health care [indiscernible] the state of child health quality, we should improve it [sounds like], okay?

Female Voice: Do we have to include the measures in the law? In other words, if we—can we ignore that because it seems to me that that's sort of a guiding—

Rita Mangione-Smith: So we have, I think in the report we have to be very clear that we made a huge effort to find reasonable measures that cover those areas and they don't make sense. If that's what we in fact find from the papers that Denise has commissioned [inaudible], we have a whole paper on the, I think, the enrollment issue because in the initial scan we didn't find [inaudible].

Female Voice: Yeah. I guess I'm just asking whether that should even be a criterion because it's almost as though you have to attend to them, and so to vote on a kind of—doesn't make a lot of sense because you have to somehow attend to them.

Jeffrey Schiff: I think what we said over lunch was this gets into how courageous we're going to be about an empty chair, I guess, is what I keep on thinking. So if we find no valid and feasible measure for something, would we rather send a report that says we didn't find one, and hence we can't do that? And that's—I mean that was some of the discussion at lunch. So I think if you're voting for number one, part of what number one says is we would potentially accept the fact that we couldn't find a measure for something that was in the legislation. Hence, how rules-based are you?

Female Voice:—vote on it. [Group cross-talking]

Female Voice: No, you don't have to do it anyway.

Female Voice: I think there is a different—what you have to do is, I think we're framing differently.

Female Voice: You say why and you don't.

Female Voice: I disagree that you can't—that you must include something or go with something simply because it's in the legislation—

Female Voice: No—you're interpreting—wait a minute. I think we're using the term "what you have to do" differently. That's not what I meant. I meant if somebody is already doing due diligence in looking at whether that is really something that you can measure, and it's kind of like why should we weigh in on it if somebody's already looking at that in terms of whether you have any real measures.

Female Voice: But they are looking at it to see if there's any measures they can give us to then say—and what we're saying is that, yes, we will obviously consider all of those measures that are put in front of us, but if no measure is put in front of us then we're going to be okay with that.

Female Voice: Yeah. I guess I can't articulate what I'm really trying to say about why we should really not sort of pay attention to one, but that's okay. [Cross-talking]

Female Voice: You don't have to vote for it.

Jeffrey Schiff: I'm going to take this minute to just do some logistics on things that we need to get done. Denise has a few. I also wanted to—somebody approached me over lunch and asked me to please introduce in the audience the senior people from other Federal agencies. So if you are a senior person from another Federal agency and want to introduce yourself, that would be great.

Deborah Willis-Fillinger: Hi. I'm Deborah Willis-Fillinger and I'm with HRSA, Health Resources and Services Administration, Center for Quality. Thank you.

Jeffrey Schiff: Thanks.

Blake Caldwell: I'm Blake Caldwell here from the Centers for Disease Control and Prevention (CDC).

Jeffrey Schiff: Okay, before you start, Denise, I just want to also ask: we don't have a formal evaluation for this meeting. What I'm going to ask, since the only part of the evaluation I always like to read anyway is the comments, is that everybody turn in a sheet that really helps us so we can plan for the next meeting, and it should include—just answer the two questions: what worked well for this meeting, and what can we improve for the next meeting? And if you could score them in Delphi for feasibility—

I think it will help because we do have another meeting coming. We have this interim process to consider as well, to make sure we do that well, so in an effort at quality improvement in our process, that'll be great.

Denise Dougherty: A couple of items. Tomorrow morning is the National Advisory Council (NAC) meeting, at which Jeff and Rita will be reporting on the deliberations of this group and hopefully some actual definitive outcomes. That starts at—the discussion starts at 10:15. It's a public meeting. It actually starts at 9:00. It's in the Humphrey Building, the HHS headquarters, which is at 200 Independence Avenue.

The next thing, I think, is if you can't go to the NAC meeting, you're wondering if this is a public meeting, what's going to happen to the stuff? The plan is for us to write up a quick summary of what's reported to the NAC tomorrow and get that out to you and on our Web site as soon as possible. After that, that gentleman back there in the yellow shirt has been recording the meeting and taking notes so there will be—we can put the transcript on our Web site, and we will probably do that rather than spend the time to turn it into a really polished document. But I think the report tomorrow is what people will be mostly interested in. So that's what the plan is for the discussion. Thanks.

Jeffrey Schiff: Okay. So folks are reviewing the materials. I think as soon as Rita is done with these tallies, we'll just look at those, and then we'll go on and talk about the importance for individual criteria.

Rita Mangione-Smith: Okay. So we have our results on our scope. So the number one—the one that received the most votes was leaning toward more grounded measures: now feasible, maybe already in place. Number two is: must be realistic about staffing and funding needs for pooling, analyzing, and reporting available data. The feasibility seems to be high in people's minds here. Three is back over here—effort to find good measures in all these different areas, including what's in the legislation, will be made; however, if no good measures exist, we're going to consider that area out of scope.

Number four—include measures not currently used by Medicaid/CHIP. Number five: choose measures that are actionable. And number six: don't waste time on unnecessary data collection on measures without a demonstrated link to improved care or access. And then the rest are as they are. So I kind of just gave you the top six. And the ones that are in the top three had lots and lots of votes. So number one had 23 votes; number two had 17 votes; and number three had 16 votes. So I think we are—these are the top three, and quite a bit of [inaudible]. Any comments on that before we move on? I think these just speak to our role, and the idea for doing this was, as we move into talking about importance, to really have our scope in mind.

Jeffrey Schiff: So we would have taken the top six and break it out there [inaudible]?

Rita Mangione-Smith: I think it's for reference—here's what we all really seem to agree on as important, and there are some other issues that were raised but did not have quite as much [inaudible].

Jeffrey Schiff: [Inaudible]

Rita Mangione-Smith: For four, five, and six—absolutely. So I gave you one, two, and three. So four had 12 votes, five had 10 votes, and six had 4 votes.

Female Voice: [Inaudible]

Rita Mangione-Smith: Yeah. It's kind of the top five [inaudible] that's a big slip. There was a big drop off. So what Jeff and I wanted to do, much like we did yesterday, was to really get us to all focus on what our criteria are going to be when we gauge the importance of the various measures that made it through the validity and the feasibility filters, okay?

Jeffrey Schiff: It gets to this—trying to do this first step.

Rita Mangione-Smith: Yes, but this is ultimately the list we will be applying these criteria to today. For every measure that comes in front of this group for Delphi on validity and feasibility, if it passes those two filters, we will eventually be applying the importance criteria that we're about to design, okay? So a couple—Denise was very kind in pulling together sort of a summary of some thoughts that we have about importance criteria, but we really want to throw this out to you to discuss and also get your ideas for additional or different criteria.

Jeffrey Schiff: So we are going to go over the importance criteria for selection of measures, an addendum to the Delphi guidance on validity and feasibility. And what I want to point out is that there are really two aspects to importance that were brought up here. The first aspect—I know this is where we want to spend a little bit of time discussing here—was the extent of the quality problem and variation in quality. And then the second one, which in some ways we have talked about quite a bit here and in other ways maybe not, is the relative prevalence, incidence, and severity of the health condition among children. So we wanted to just confirm or add or embellish this list for importance before we actually start talking about ranking these, so we'll open this up for discussion. Marlene.

Marlene Miller: [Inaudible]

Denise Dougherty: One quick thing just to add: within variation, we're considering disparities within that realm also, [inaudible] so let me show you what's in this chart. I know you've all memorized it by now but—in your spare time—quality gaps and variations in quality. It's five measures that we have, and it's information from the National Survey of Children's Health for some measures on, for example, duration of enrollment. So children—publicly insured children have a rate of gaps in coverage that is 2.25 times that of privately insured children.

Denise Dougherty: Now the HEDIS [Health Plan Employer Data and Information Set] data that I have, and Sarah said she has more, didn't have racial and ethnic disparities, but it has disparities by region and also says what the—So there is, for example, the percentage of low birth weight infants from the National Healthcare Quality Report and National Healthcare Disparities Report. We don't have the data for Medicaid because it's vital statistics, and you don't have that, but we do have State variation: the worst State here is Louisiana, and the best States are Alabama, Oregon, and Washington. And we have racial disparities: rates for blacks are greater than for mothers in other racial groups, and rates for non-Hispanic blacks are greater than for other race/ethnicity groups. So it's that kind of information, measure by measure, so we can refer to this as we go through the list and see if there is still a big gap that we need to fill here. The question of actionability is something that we didn't address, but people can address it here.

Female Voice: Jeff, could I ask one question for clarification? You said—before we got started on ranking, did you really mean rank, or do you mean rate them?

Female Voice: We're going to rate them—

Jeffrey Schiff: Right, I'm sorry—

Female Voice:—so keeping in mind what we're thinking of as our three categories: definitely important, 7 to 9; uncertain level of importance, 4 to 6; definitely not important, 1 to 3.

Jeffrey Schiff: So the same way we did it—

Female Voice: The same way you did it for validity and feasibility. And we will allow measures that are 4 or higher [inaudible] as a preliminary set, not to say that we [inaudible] at a later date get cut because we see other measures in the next couple of waves [inaudible].

Female Voice: Can I—it just strikes me, even looking back at the validity and feasibility ranking and now the importance, it seems very disjointed for me to consider this without all the measures that we've even identified in the last 2 days because I ranked them relative to each other. And so this is the subset that I would rank differently if I see some of the other measures that we talked about, the measures outside of current Medicaid and CHIP on the same table.

Female Voice: Right. So just so that—and I'm right there with you. I would always look at the whole set relative to each other, but given that we don't know what that whole set is, these were measures in the initial Delphi that made the cut for validity and feasibility. We wanted the importance on this. We'll go through the exact same process with new measures that are proposed between now and the end of—August 24.

Marlene Miller: Will we ever get them all—

Female Voice: We will.

Marlene Miller:—in one batch to vote on?

Female Voice: Yes, at the September meeting. So you guys on the phone line are going to have to do validity, feasibility, and importance of the new measures, and then we all get together in September, and we have a big, long list that's more than the 10 or 25 [inaudible]

Jeffrey Schiff: So what I would say, Marlene, to be more specific is, if something drops off as unimportant this time, you probably won't see it again, but if it makes it to this list, we'll still—now that we've sort of adopted Marina's list, we still may have to segregate what goes where. And when we get to the nitty-gritty about how we represent the core set based on these sorts of values or principles or whatever it is, the scope things we just talked about, we'll still have to segregate that out. So don't feel like you're—you're not voting by comparison right now—

Marlene Miller: I understand.

Jeffrey Schiff:—and in some ways this is a forcing function so we can talk about what we've—I mean we could—I think this is important for us to have for tomorrow as well.

Marlene Miller: Then I guess the one piece I would add on importance, after what we've just ranked up there, is that actionable has to be considered. I mean you have—

Female Voice: So you want that to be in the importance criteria?

Marlene Miller: Yes, we've—you talked about variation in quality and the prevalence, but what we just voted as number five is that it's an actionable measure, that there's something you can do to improve it.

Female Voice: So that's what this whole discussion is for. Then let's get down what we think the criteria need to be: the ones you were thinking of when you gave it that Delphi score on importance.

Female Voice: Can I ask a question? Is there an opportunity to [inaudible]?

Jeffrey Schiff: Yes.

Female Voice: Okay. So people who've been using the term actionable.

Male Voice: In my mind it's: is there something—if I were a State, and you said I had to track this measure, could I actually do it? Is there a proven effort I could bring into my State, or a proven effort that's ongoing nationally that I could join into, that actually had shown results of improved outcomes for kids, and the extra bonus would be that it also reduced costs? So I just—like low birth weight: we've measured it for years and years and years, and there are some things that could be done, but it's not as actionable to me as some of the other issues because it's so socially determined outside.

Female Voice: Right. So it's kind of the sub-example that we put: if you have a rate on a measure that's suboptimal, what do you or can you do to improve on it? Yes?

Denise Dougherty: But there are levels of evidence for improvability in that kind of thing. So, you know, I just want us to be aware that just because somebody heard of some State or local hospital doing something good doesn't mean that everybody else will be able to do that improvement [inaudible].

Female Voice: I think that's true, but I think Paul's example is great. I mean, if you think of low birth weight, we all know there are so many other determinants outside that little pediatrician's office that influence that. That even if you had two pieces of weaker evidence on what's actionable, that's going to be very different than some of the other things that we can talk about.

Female Voice: So there's—I mean if you accept the assumption that prenatal care has something to do with preventing low birth weight, and you see the variation in prenatal care received by the Medicaid population, you would say—I mean, I think another way to think about something as actionable, even though it goes against what I just said about the evidence [inaudible], is if there's variation and some States are really doing it well, we may not know why yet, but the fact that others are doing it well means we can learn from that. Is that—?

Female Voice: Maybe, maybe. I bring up measurement issues and all that kind of [cross-talking]

Female Voice: It's a little tricky, too, [inaudible] a lot of what we're looking at [inaudible] these processes are usually actionable. Outcomes tell you, you need to fix the processes, so I think that's part of the [inaudible]

Male Voice: I think it is reasonable to agree to the belief that if I tried that in my State it would result—you're right, I mean we—you have to wrestle with that whole issue of the level of evidence, but I think we allowed that in when we were talking earlier—but it is this idea of what would really be something that would be—

Denise Dougherty: [Inaudible] request is, if somebody says something is actionable to tell us how it's actionable so we can tell the Secretary this can be improved, and here's what can be done about it. Is that—?

Male Voice: I think, just in context, those also go in the feasibility part, because you're going to—let's just take the analogy that we're talking about with low birth weight: we don't know what the causal factor is, but even to get the data you're going to have to do something at the practice level, whereas prenatal care, I can do that at the administrative plan level. So I think there's a level of feasibility that you have to sort of think about as you're trying to put it where it belongs, if you will.

Female Voice: There are two levels of feasibility. One is can you get the measure, can you get the data? And the other one is—

Male Voice: Can you really measure it?

Female Voice:—this is actionable, but can I do it in my State?

Male Voice: Yeah, that's true.

Female Voice: So we're going to go to Jim.

James Crall: [Inaudible] try to sort through a little bit, I think, like Marlene, because even the description of the measure here is sometimes a little bit at variance with the wording we had under feasibility and validity, so I'm just sort of trying to understand it. But, all right, so this says that—to be on this list, a measure has made it through the validity?

Female Voice: So, 7 to 9 on validity and 4 to 6 on the feasibility [inaudible] criteria, and that means it was not discussed yesterday.

James Crall: Yeah, not discussed yesterday.

Jeffrey Schiff: Okay. And then just to remind everybody if it was discussed yesterday, we're going to revote it during the—

Female Voice: Round two for Delphi.

Jeffrey Schiff:—during the interim. And then if it passes that time, it'll be re-voted for—it will be voted for the first time on importance.

Female Voice: So I just wanted to check to see if I understood correctly that you said if we talk about these today, and they're voted not important, we'll never see them again?

Female Voice: So you'll all be doing a private Delphi, it's not by consensus, and if the scores come out less than 4, they would be dropped.

Female Voice: So my question is, for example, on the several mental health measures that are on here. There's a whole paper being written to advise us and give us background on these mental health measures, and we're not even going to be able to see that paper, and yet we have to vote on them now?

Female Voice: But we're not—but that paper was about the measures, not about how many kids have mental—I think this is about is there a big gap in access [inaudible] for mental health problems and under a margin of [inaudible]. A sufficient number of children have mental health problems to warrant making sure that this is [inaudible]. I think we need to [inaudible] validity [cross-talking]

Female Voice: And see—yes.

Female Voice: We need to really not think about validity and feasibility today, that was yesterday. This is really about importance. I mean, do you feel that—if you feel this is an area that's important, then you should give it a nine.

Male Voice: [Inaudible].

Female Voice: Don't think about the [inaudible]. So what we're saying is, if this passed through the Delphi process vote, then they think that these are valid, feasible measures, they meaning us—

Female Voice: But we've had no opportunity to discuss them, and some of us know a lot about these measures that might influence other people.

Male Voice: And even though we know that other—both measures are actually being done [inaudible].

Jeffrey Schiff: So this is how people—people scored it that way. I think—I guess there's a couple of things to say so—

Female Voice: I would put out a proposal. If the group would prefer, we can have you re-Delphi the whole set, but the problem is we don't have time to go through measure by measure and discuss every single measure, so we had to pick ones, based on your first vote, that were controversial. Okay? So I understand that ideally we would've had 4 days to do this and could discuss every single measure, but we just—the reality of our time [inaudible] that we had to do it this way.

Male Voice: Right. And, Doreen, to your point, if there are other measures that are identified—they will come back in, and we have to decide on the scope and the size of our core set, even if people rank some of these as important, they—

Female Voice: They may later fall off.

Male Voice: They may later fall off, I guess is what I'm saying.

Female Voice: But is it important as a condition? In other words, children's mental health is important. Or is it that measuring continuation and maintenance of attention-deficit/hyperactivity disorder (ADHD) care is important?

Female Voice: So I'd like to—I'm just going to do a process check here. We're going to go through each one of these individual things and give people the chance to comment on whether they think they're important. But first, we have to get a criteria report, okay? So if we could ask you to not focus on the sheet right now and just focus on what we want our criteria for importance to be. And then we'll—we'll continue to do the detail through all of this.

Jeffrey Schiff: Okay. We have a few patient folks here. Xavier?

Xavier Sevilla: A lot of people mentioned this yesterday and I think it should be up there—cost.

Jeffrey Schiff: Yes.

Female Voice: Cost of the service or cost of the data measurement?

Female Voice: [Cross-talk] Cost of the [inaudible], okay.

Jeffrey Schiff: All right, Phyllis?

Phyllis Sloyer: I was going to add cost as well, but I want to bring up a point with respect to low birth weight and when I consider other population-based measures, and it does get to importance criteria. That's not something that necessarily is sort of at a practice level per se. There are so many other variables, other social determinants of health. But I don't want to lose those. It's not for this exercise, but at some point. And I—I applaud New York for what they've done. They have almost a companion set that begins to look at overall health outcomes. And you can, through dataset matching, look at the global low birth weight and begin to look at Medicaid and CHIP.

I think we need to get there, but I'm not sure it's important enough as a population-based measure for this activity; I don't want to lose sight of it, though. It's the same thing with infant mortality, with injuries, childhood injuries. I mean those kinds of things do become important. You should begin to roll your outcomes up to a population level. Just a thought in terms of future things.

Female Voice: Is there a criterion that you think captures that?

Phyllis Sloyer: It's almost something you wouldn't put in this because it's population based and not necessarily directly—in other words, the process, the input and throughput, does not have a one-to-one correlation with the overall outcome.

Female Voice: The National Committee for Quality Assurance (NCQA) has a bunch of criteria under importance. And accountability by the health care system is one of them. And so I think I hear you suggesting that, I'm not sure that that's right so—

Female Voice: Yes, the accountability of the health care system?

Jeffrey Schiff: Aren't you—isn't what you're saying, Phyllis, a little bit of the inverse of what Xavier and Paul were saying? That is, do we need to, at some point, look at accountability for child health? This is a very public health thing to say, which is great, but maybe—but it may not be actionable—

Female Voice: For this—For this exercise—

Male Voice: Right.

Female Voice: But you shouldn't have those two layers in the overall quality framework. It's just that I think some of these measures [inaudible] systems level [inaudible].

Female Voice: Just one second, do you think that we can keep track of where we are at—?

Jeffrey Schiff: It doesn't matter.

Female Voice: Okay, just a mistake.

Jeffrey Schiff: It's too late now.

Female Voice: In the materials that got sent out early, you had as criteria for importance kind of two linked variables that might be called the epidemiology of the quality problem. One is the extent to which it just is a problem, and the other is the extent to which there's huge versus not-so-big variation across the States in that problem. And those struck me as two perfectly reasonable criteria for importance.

Female Voice: So extent to which the health care issue, health care condition—[inaudible]?

Female Voice: Well, this was the quality. I mean, what was here was the extent of the—of the quality of care problem. In other words, are 10 percent of the Medicaid systems not seeing a problem or 90 percent of them? That—you know, that's kind of the absolute—how bad is the problem? And then there was the variation across, I guess, State programs or however you want to think about it because that suggests not only—there might be some actionable steps to take out there, but there are benchmarks that folks could use for thinking about improving. So I would have included both of those. So it seemed reasonable to me from that draft material.

Female Voice: So extent of the quality of care problem and variation in performance? Would that mostly capture it?

Male Voice: I genuinely—I don't know if this belongs here or in a subsequent narrowing process, but I'd just keep going back to the notion that we're trying to slim this down quite substantially. So what I want to toss out is the theme "representative of a class of problem." And maybe that has to come in later, but it seems to me that a lot of our measures are narrow and the quest—and a lot of our problems are broad. And one of the things we ought to be looking for is things that if we see them—I don't know if they're sentinel, I don't know if they're representative, I'm not sure what the term is. And frankly, I think it's going to be hard to do this when we're still expanding because much of this is a relative judgment, and I don't know how to make the relative judgment if there are still things coming in, but I just want to put that out there as a factor.


Page last reviewed October 2009
Internet Citation: July 23, 2009 (continued): Transcript: First Meeting of the Subcommittee on Quality Measures for Children in Medicaid and Children's Health Insurance Programs. October 2009. Agency for Healthcare Research and Quality, Rockville, MD. https://archive.ahrq.gov/policymakers/chipra/chipraarch/snac072209/sesstranscrr.html

 
