Webinar Transcript - National Quality Strategy Webinar: Using Measurement for Quality Improvement
September 17, 2014
Download accessible version of slides (PDF, 2.4 MB)
National Quality Strategy Webinar: Using Quality Measurement for Improvement. September 17, 2014 [Slide 1]
Operator: Ladies and gentlemen, thank you for joining today's event — National Quality Strategy Webinar: Using Quality Measurement for Improvement. During the presentation, all participants will be in a listen-only mode. Afterwards, we will conduct a question-and-answer session. At that time, if you have a question, please press the 1 followed by the 4 on your telephone.
If you would like to ask a question during the presentation, please use the chat feature located in the lower left corner of your screen. If you need to reach an Operator at any time, please press star 0. And as a reminder, this conference is being recorded on Wednesday, September 17, 2014. I will now turn the floor over to our facilitator Heather Plochman to kick off the conference. Please go ahead.
Housekeeping [Slide 2]
Heather Plochman: Thank you. Good afternoon, and welcome to the National Quality Strategy Webinar: Using Measurement for Quality Improvement. A few housekeeping notes before we get started with our agenda: this session is being recorded, and an archive including a transcript will be available on the Working for Quality Web site in 2 weeks.
A copy of the slides was distributed with the email reminder for this Webinar, and you can also download them from the console. You will have the opportunity to ask questions at the end of the presentation, but you can always submit questions through the chat box at the lower left corner of your console.
At the end of the presentation, we will have a very short survey. It will only take about a minute to complete; we're just collecting some feedback on today's session, and we would like to use that feedback to help plan more Webinar events like this in the future.
Agenda [Slide 3]
Heather Plochman: First we'll hear from Nancy Wilson, Executive Lead for the National Quality Strategy and Co-Chair of the U.S. Department of Health and Human Services Measurement Policy Council. She'll provide a brief overview of the National Quality Strategy and the Measurement Policy Council's work.
Afterwards, we'll hear from Kate Goodrich, Director of the Quality Measurement and Health Assessment Group at the Centers for Medicare and Medicaid Services and Co-Chair of the HHS Measurement Policy Council. She'll speak to CMS' quality strategy and vision for quality measurement.
Finally, we'll hear from Kevin Larsen, Medical Director for Meaningful Use at the Office of the National Coordinator for Health Information Technology, who will speak to us about health information exchange and interoperability in the learning health system. Let's go ahead and get started with our agenda. Nancy?
The National Quality Strategy: Using Measurement for Quality Improvement, Nancy Wilson, B.S.N., M.D., M.P.H. [Slide 4]
Nancy Wilson: Thanks, Heather.
National Quality Strategy: How It Works [Slide 5]
Nancy Wilson: Actually, the National Quality Strategy is really here in a nutshell, if you will. What we're really talking about is the importance of having multiple stakeholders, including States, the Federal Government and HHS, and private-sector, multi-stakeholder groups, all engaged and aligned on a set of priorities. And we, as a Nation, have really determined that the priorities you see in Column 2 are those that we should spend our time on. It's really how we think about the National Quality Strategy.
But of course, once you have, "Who's involved?" Everyone. "What are the priorities that we want to focus on?" which we think is critically important. Then, how are we actually going to do something to achieve the three aims of better health, better care, and lower costs? So it's one thing to say, "Patient safety, person- and family-centered care, care coordination, leading causes of mortality and morbidity, health and well-being, and affordable care are our priority." And I think that those will stay as our priorities for a long time until we get them perfect.
It's another thing to focus on the levers and say, "Well, what do we do? What do we do to actually make improvements on the priorities?" And so we really think in terms of multiple levers: measurement, public reporting, technical assistance, accreditation, regulation, benefit design, payment, health IT innovation, workforce; we see all of those as playing very critically important roles to getting us to where we really see improvement in better care, better health, and lower costs.
Quality Can Be Measured and Improved at Multiple Levels [Slide 6]
Nancy Wilson: So today we're really going to be focused on measurement. Measurement as a lever, if you will. So we're diving in deeper, and the point of this slide—which I borrowed from Kate—is really that quality can be measured and improved at multiple levels and that there's a relationship between what we measure at the individual physician level, at the practice setting, and the community. And that these things can roll up and roll down.
So we can be thinking about patient-centric, outcomes-oriented measures at all three levels. We can think about the six priorities at all three levels. And we can think about how these things provide a snapshot of performance at the various levels that we're thinking about.
Now, despite all that, there's also work that needs to be done in harmonizing what the measures are that we already have because I think that we, historically, spent a fair amount of time kind of focusing on individual physicians or focusing on practice settings or our population health colleagues have really focused on community-level denominators. And so I think that there has not been a systematic, coherent approach to thinking about these things as we build measures in the future.
Rationale for Addressing Measure Proliferation [Slide 7]
Nancy Wilson: So, one of the things that we thought about at the HHS level is that we've heard a lot about the need for fewer measures and a decreased burden of measures. And we've also really recognized the critical redundancies that happen when measures are slightly different; they sort of seem like they're measuring the same thing, but not exactly.
And at the time, this was back in 2012, there really wasn't a systematic mechanism, at least inside HHS, to align; coordinate; and approve development, maintenance, implementation, and retirement of measures across HHS programs. So we decided that we should make that happen, that we needed to really work at least within HHS to try to align and harmonize measures, recognizing that there were a lot of harmonization issues between the private and the public sector that needed to be addressed beyond what we could do, but what the space was that we could work in—our own little space—was HHS.
HHS Measurement Policy Council (MPC) [Slide 8]
Nancy Wilson: So we created the HHS Measurement Policy Council. And it has representatives from Agencies all across HHS. And we came up with a charter, and we said, "OK, well, initially it was about harmonization." And then we said, "You know what? We really think we need to be focused more broadly on all the issues around measures that need to be addressed for programs across HHS."
So we've been focusing on specific areas, developing a coordination plan for future measure development, piloting rules for categorizing measures—which we're still working on. Which measures address which of the priorities or CMS domains, if you will, for the National Quality Strategy? And these aren't issues that are easily pegged. We are working on them, grappling with them, and we would definitely want your input on them.
MPC Guiding Principles [Slide 9]
Nancy Wilson: I think more than the specific content that we've focused on are the policies and the principles that we've really agreed on and that's really—I give Kevin credit for this—measures that matter and minimize burden. I think if I had a slogan for the Measurement Policy Council: "Measures that matter and minimize burden." So, thank you, Kevin, for that.
We really want to learn and continue to develop lessons and streamline what it is that we have. We also want to be transparent with what we're doing and be consistent with what's happening with the National Quality Forum, the Measure Applications Partnership, etc. I mean, this isn't about reinventing the wheel; this is about aligning, harmonizing, all kind of going in the same direction.
MPC Scope of Work: Short-Term [Slide 10]
Nancy Wilson: So we've had various topics that we've focused on. And the results of this are available on our Web site for you to see. When we first looked at our measure inventory of all the measures that were used across programs in HHS for hypertension, we found 51 measures of hypertension control, but we actually came to agreement on one, NQF 0018, I think it is.
So now, migrating from agreement to, "Let's all use a particular measure to measure hypertension control," and migrating from that to, "OK, in all of our operational systems, it's actually embedded and we're doing it," takes a couple of years, and we know that. So you may be saying, "Well, I don't see the results of that. I still feel like I have multiple requests from HHS."
But we're working on that, and it would be great to get feedback from you in terms of, "Can you see some change that's happening where things are getting a little bit more aligned from your perspective as end users of these kinds of measures?"
MPC Scope of Work: Long-Term [Slide 11]
Nancy Wilson: I think we're wrapping up here, and I want to turn this over to Kate and to Kevin. Measure alignment is really one of our critical goals, but it's also about new measure development: "What are the priorities that we have as a Nation for measure development? What do we really want to focus on and put our money toward?"
And money's tight, so we have to really be thinking in a coordinated fashion and with a consensus-based approach about, "What is it that we think is most important as far as measurement goes to drive improvement over time?" Personally, stepping out of my role, I'll say I think it's patient-reported outcomes. I think we can drive a lot of innovation, including disruptive innovation, if we have a lot more patient-reported outcomes.
But personal opinion aside, the other piece that we really feel is part of the scope of this group within HHS is to think about, again, where do measures live? Are they viable? Are they talked out? Do they need to be retired? How do we identify core sets of measures, so that if somebody's trying to go along with HHS, you've got the basics of what the critically important measures are that we feel people should be trying to implement and address?
Kate Goodrich, M.D., M.H.S. Director, Quality Measurement and Health Assessment Group [Slide 12]
Nancy Wilson: All right, our next slide, I think moves this on to Kate, and I get to introduce her. Kate's in charge of measurement for CMS. Kate has been a real champion, and the Nation is really indebted to her common-sense approach to measurement and measures for programs. So that's the thing that I really want to say about her. She has a great bio—Robert Wood Johnson, Yale, etc., etc. But what I want to say about Kate is that she has brought a breath of fresh air to the measurement enterprise for CMS and has a very intelligent common-sense approach to measures that are used for programs in the country. OK, Kate— now you can take it away.
Kate Goodrich: Well, thank you, Nancy, that's very nice. So, hi, everybody. It's good to be with you today.
Strategy Logic [Slide 13]
Kate Goodrich: So before I talk about measures, I think it's really important to talk a little bit about CMS' own quality strategy because the way that we have designed our quality strategy directly informed the types of measures that we've developed and use.
So we published our quality strategy in—I think it was November of 2013. We got some really good public comment on it and then refined it based on that comment, so it's available for any of you to look at on the www.cms.gov Website. The strategy is designed sort of like the pyramids you see here, where we identify the mission and vision. And I think, importantly for this call, we identified the goals for CMS that really map directly to the six priorities of the National Quality Strategy.
Underneath each goal, we have identified specific objectives such as reducing harm in the delivery of care, and so forth. Reducing readmissions, improving care for patients with multiple chronic conditions—that sort of thing. We have identified initial performance measures and targets for each goal. And then getting more and more granular, we've identified specific initiatives that CMS is undertaking to reach each of the objectives with even more specific activities underneath that.
And we really tried to design the strategy to not just be about what CMS can do to drive improvement on the goals and objectives, but what kinds of things can we do to help foster the right environment for frontline providers to also be able to improve. And so, that's an ongoing conversation that we have with all of our different stakeholders—so it's inside and outside of the Federal Government to get their feedback on what we can be doing better to create a new environment that allows for direct quality improvement in patient care.
The Strategy is to Concurrently Pursue Three Aims [Slide 14]
Kate Goodrich: So our strategy is to pursue the three aims that you already know are part of the National Quality Strategy: better care for patients; better health for communities and the population, addressing behavioral, social, and environmental determinants of health; and finally, affordable care, which we like to think about at CMS as "lowering costs through quality improvement."
CMS Framework for Measurement Maps to the Six National Quality Strategy Priorities [Slide 15]
Kate Goodrich: This slide highlights the fact that we have taken the six priorities of the National Quality Strategy and mapped them to the domains for measurement. So we have the one that probably most folks are familiar with, the clinical quality of care domain. These are the measures that I think most of you are probably familiar with, at the physician level or the hospital level, the dialysis facility, and so forth. They're really related to specific conditions or subpopulations.
We have measures of care coordination, population and community health, and efficiency and cost reduction. And under this bucket I would put appropriate use of technology and therapeutic safety measures, and then finally person- and caregiver-centered experience and outcomes. So, I would sort of separate this into patient-experience measures like CAHPS, as well as the patient-reported outcomes that Nancy talked about. And I'm going to talk about that a little more in a minute.
And we really believe that measures should be patient-centered so we should really be engaging and involving patients as we develop these measures. And where possible, they should be outcome oriented or at least paired with a measure of a process of care that is very directly linked to the outcome of interest.
Make Care Safer [Slide 16]
Kate Goodrich: So I want to give a couple of examples of how we have structured the strategy. So, for example, under the "make care safer" goal for CMS, we have three objectives. Number one, we want to improve support for a culture of safety within provider organizations. A lot of people will think about this as being within a hospital setting, and that is true.
There's a lot of work being done by AHRQ and others to really drive towards changing our culture so it really becomes mostly about how we eliminate harm, but I think it matters for all different settings of care—for individual clinician's offices, for rehab centers and dialysis facilities and so forth, so across the board.
Reduction of inappropriate and unnecessary care. A lot of people think about this type of objective as really speaking more to reducing costs. And I think it is important for reducing costs, but more importantly, we think that inappropriate care, such as overuse of some technology, actually can lead to harm. We actually know it leads to harm. So that's why we put this under the "make care safer" goal.
And then finally, preventing or minimizing harm in all settings. And by the way, there have been tremendous strides made on this particular objective over the last few years in terms of reducing, in particular, a number of different hospital-acquired conditions. So that's very exciting.
Promote Effective Prevention and Treatment [Slide 17]
Kate Goodrich: And as another example, one of our goals is to promote effective prevention and treatment, and here we had five objectives.
There's obviously a lot one can do in this space, but we needed to try to focus on particular areas that we thought were critically important because they affect a large swath of the population, or because they're areas that we know have, in the past, been a little bit more neglected. So, for example, we have increasing appropriate use of screening and prevention services; there's a lot under the Affordable Care Act that tells us we need to do more of that. Strengthening interventions to prevent heart attacks and strokes; there's a whole campaign called "Million Hearts" to do exactly that.
Improving the quality of care for patients with multiple chronic conditions. This has become just a really intense area of focus for us at CMS, and I would say across the Department, actually. Our Medicare and Medicaid beneficiaries often have two or more chronic conditions, and those folks have, I think, really special needs.
Improving behavioral health access and quality of care. I think until recently, this has been one of the sort of neglected areas, so we really wanted to call out and show that we were putting some resources into that. And finally, improving perinatal outcomes. There's a lot of work going on right now within States, also within our innovation center and our Center for Medicaid, that is really working hard to improve perinatal outcomes.
CMS' Vision for Quality Measurement [Slide 18]
Kate Goodrich: OK, moving to measurement. A few words about this. So overall, our vision for quality measurement is to align measures with the National Quality Strategy in the six measure domains or priorities that I've already described, and, looking at those six domains, to implement measures that fill critical gaps. And believe me, if you are a measure nerd like me, you know that there are many, many gaps in measures across all the domains, including the clinical care domain.
But I would say in particular, we have some critical gaps in measures that we know are really important to patients, such as patient experience measures as well as patient-reported outcomes. Where possible, and where it makes sense, we need to align our measures across like CMS programs, such as the number of different physician reporting programs; we need to be using the same measures for all of those programs. And the good news is that in 2014 we have aligned the measures, but there's a lot more that we need to do to align things like reporting periods and data sources and so forth.
We need to really be looking hard at our measures and how they're performing in the field, too, so that we can remove measures that are no longer appropriate, where the evidence has changed, or where performance on the measure is pretty much across the board very, very high and there's really no more room for improvement, because the goal of measurement is to drive improvement.
And I think also critically important, is not only for us to align within CMS, but also to align with our external stakeholders. So very importantly for providers is for us to align measures with private payers, specialty societies and medical boards. And the good news is there's a lot of goodwill and engagement across the entire healthcare sector, public and private, to do exactly that. It will be hard, it will take a lot of time, but it's the first time that people are really actually sitting down and trying to figure out what are the right core sets of measures that we use across all payers.
Landscape of Quality Measurement [Slide 19]
Kate Goodrich: So if we can actually do what we have set out to do and achieve that vision, then we will address a lot of the problems with the current landscape. So it has historically been a very siloed approach, where we have different measures within each program. We have different reporting criteria for each program, so clinicians and hospitals and others have to find out what all the different criteria are for each different program.
Those of you who are frontline providers know that we do still have room to improve there. We've made some significant improvements within the last couple of years as well. There hasn't heretofore been a clear measure development strategy, and this is where the National Quality Strategy and the CMS Quality Strategy have really helped us. Most measures have been "disease-specific measures," and those are necessary, but they aren't the be-all end-all.
And having these different silos has been confusing and burdensome to stakeholders, in particular providers, but also very burdensome to CMS because we had sort of had these different stovepipe IT solutions to quality measurement. So if we can get to a place of alignment, not only will it reduce burden but we'll actually all be focused in the same direction on the same areas to improve which will result in better outcomes for patients.
The Future of Quality Measurement for Improvement and Accountability [Slide 20]
Kate Goodrich: So what do we see as the future of quality measurement for both improvement and accountability? We need to transition meaningful quality measures away from very narrow setting-specific snapshots, and I would say that really is where we probably still are mostly right now. And part of that is because of the way the payment systems are set up. But that is something that we're thinking about as we look ahead to what the payment systems are going to look like.
We need to reorient and align measures around patient-centered outcomes that span across settings. These types of outcome measures are hard. They're really hard to develop, they're really hard to identify, and they're not always going to be available for every different specialty or every different setting. But the goal is to really try to get there. And we need to base measures on patient-centered episodes of care where possible. Again, I think as our payment system transitions to more, sort of, global payment across different settings, that will become easier to do.
And we would like to be able to capture measurement at three different levels: at the clinician level, the facility level, as well as the community level. And then finally, as I mentioned before, "Why do we measure in the first place?" Really, to help drive improvement.
CMS Activities on Patient-Reported Outcome Measures (PROMs) [Slide 21]
Kate Goodrich: And finally, just a few words about patient-reported outcome measures. I will say that this is an incredibly high priority for CMS to focus many of our resources on these types of measures.
We actually funded the National Quality Forum a couple of years ago to give us guidance on the development of patient-reported outcome measures. They brought together a wonderful group of national experts on this topic, and it was very, very helpful to us. We do use a number of patient-reported outcome measures in our clinician reporting programs, such as improvement in or change in depression scores and functional status, but it's a very small number at this point, not nearly what we need.
So we are working with AHRQ, with ONC, with HRSA, and other Agencies within HHS to identify existing patient-reported outcome measures that can be rapidly incorporated into our quality reporting programs, including in the ACO program and the innovation center payment model programs. And we are currently developing patient-reported outcome measures for the hospital and outpatient settings; functional status after hip and knee replacement is one example.
So we're very interested in function and other topics that are disease-specific or patient-specific. But also, we're very interested in more general, crosscutting types of patient-reported outcome measures, such as general functional status, symptom management, quality of life, that sort of thing.

And one of the things that we have done in the last year to really try to make sure that we are getting actual patient input into the development of our measures from beginning to end is that we now require in all of our measure development that patients be involved, and this can mean having them involved in an expert panel kind of setting or through the use of focus groups. But we're working with our colleagues who develop measures with us to try to understand the best way to get patients involved in this process, because it can get very technical and methodologically complex.
But we feel that it's just critical to do, so that is something that we, over the last year, have been requiring in all of our measure development contracts. So, I'm very optimistic about where we see this going. We have a lot of challenges around the development and use of these measures, but it is a real priority, and we're very lucky to have Kevin and Nancy and other stakeholders inside and outside of the Government also working with us to overcome some of those barriers. So I will now, I think, turn it back over to Nancy, who is going to introduce Kevin.
Kevin Larsen, M.D., FACP Medical Director, Meaningful Use, Office of the National Coordinator for Health Information Technology [Slide 22]
Nancy Wilson: Thanks, Kate. That's great. Even I have been generating questions about the patient-reported outcomes, so hopefully our audience will as well. But before we go there, because some of my questions have to do with Kevin and the work that Kevin does and how we incorporate patient-reported outcomes into our health information technology, let me introduce Kevin and have him talk a little bit.
So Kevin officially is the Medical Director of Meaningful Use at the Office of the National Coordinator for Health IT. And he is responsible for coordinating the clinical quality measures for Meaningful Use Certification. And he oversees the development of the population health tool that he's been working on. I'm going to—as I did with Kate, hopefully we'll send you out their official bios. I'm going to digress and say Kevin has been one of our passionate advocates for getting below the numerator and denominator of measures and really thinking about, "How do we standardize the building blocks of measures? And how do we make that the information that we capture so that we can build measures—multiple measures—off of those standardized building blocks?"
So I'm probably not saying that the way that Kevin would, but I will say he's been a passionate advocate for getting us to really think beyond the basics of a numerator and a denominator on measures, and how to build that into technology so that it really is a decreased burden on providers, and that in particular I'm going to emphasize.
The other piece that I will say that is also something that I should have said with Kate, is that both of these people are really brilliant at keeping their eyes on the horizon of improvement. It's all about improving health care and all of the stuff that they're doing so terrifically is about trying to improve health care.
"How does it help improve?" So the improvement piece is important because you can get into the weeds of measurement and just stay there, and never make the connection between measurement as a lever for improving health care and supporting health. OK, Kevin, I'm going to turn it over to you.
Kevin Larsen: Thank you so much, Nancy and Kate. So if you could move to the next slide please.
Health Information Exchange [Slide 23]
Kevin Larsen: So ONC is the Office of the National Coordinator for Health IT. Our role in the Government is to work on the standards for health information technology and to coordinate across Federal Agencies and with private partners to have people adopt standards, sometimes through policy and/or our certification program. The goal is really to be sure that information can move smoothly, seamlessly, and electronically, but is also safe and protected; to keep the patient at the center; and to really allow the technology to support the patient and the care team in highly efficient, highly effective care.
"I am the expert about me.": Patient-Reported Outcomes [Slide 24]
Kevin Larsen: So you'll hear a theme from all of us that patient-reported outcomes are a key priority for HHS. This is a quote I heard from a patient that, "I'm the expert about me." And we are really doing ourselves a disservice if we don't truly leverage the work and partnership that patients bring to their own health and their own healthcare management and healthcare improvement.
I've heard it said that 99 percent of health care happens at home and only 1 percent of health care happens within the health care organization. So we need to find ways to be sure that we are truly leveraging the expertise of patients, their engagement and involvement, and their increasing interest in using technology to monitor themselves, track and make their own health decisions, and to find ways of connecting that in with the professional healthcare systems and providers that really are supporting their care.
Interoperability [Slide 25]
Kevin Larsen: So ONC works hard on interoperability, and what does this have to do with measurement? Well, a little bit more on that later, but as Nancy mentioned, one of the goals here is really to capture the data once and then reuse it many times. So that could be once at a single organization, or it could be once and then shared with many organizations. That's the ideal state. To do that, to really be interoperable, the data not only has to be shared, but it also has to be useful at the other place.
So here's an example. Say we collect a patient's name and some clinical notes in one setting, let's say a hospital, and we want to now share that with another setting, let's say a long-term care provider. In Example A, that data has moved over because both systems used the same way to represent the information: the name field, with first name and last name, everybody knows which one is which, and the age field is the same.
In the second example here, it's not interoperable. Although the information is shared, the second organization can't really use it. And too often that's the place that we're in now, so we're repeatedly collecting information over and over and over again each part of the health care system doing this on their own because they can't actually leverage the work and care of the other organization's information. ONC's job is to help make sure that the standards exist and the technology is certified so that this interoperability can occur.
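Kevin's two examples can be sketched in code. This is a hypothetical illustration only; the field names, record layout, and functions below are invented for the sketch and are not part of any actual health IT standard.

```python
# Hypothetical sketch of the interoperability contrast. The field names
# and records are illustrative, not a real health information exchange format.

# The hospital exports a record using an agreed-on representation.
hospital_record = {"first_name": "Ada", "last_name": "Lee", "age": 67}

# Example A: the receiving system expects the same field names, so the
# shared data is immediately usable -- no re-collection needed.
def import_interoperable(record):
    return f"{record['first_name']} {record['last_name']}, age {record['age']}"

# Example B: a system expecting different field names cannot use the same
# record, even though the information was technically "shared."
def import_non_interoperable(record):
    try:
        return f"{record['name']}, age {record['dob']}"
    except KeyError as missing:
        return f"unusable: no field {missing}"

print(import_interoperable(hospital_record))      # Ada Lee, age 67
print(import_non_interoperable(hospital_record))  # unusable: no field 'name'
```

The point of the sketch is that sharing alone is not interoperability: only when both sides agree on representation can the second organization reuse the data instead of re-collecting it.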
Common Data Elements: The Future [Slide 26]
Kevin Larsen: So to the point that Nancy made about, "How do we do this in measures?" Ideally, we use some building blocks for measures. And one of those building blocks is common data elements. What do we mean by that? Well, if we are all using the same way to represent gender in every measure and every place we collect health care data, and we encode it using standards, then we know how to share it, and no one has to re-collect that piece of data.
It's automatically been collected, it's sent from organization to organization, the information flows in appropriately, and it is reused. In the world of measurement, what too often has happened is that each and every measure defines these things for itself. One measure defines diabetes in one way and another measure defines diabetes in another way, but really we all mean diabetes.
And so as we move to this common data element model, if I've defined diabetes for one measure in Million Hearts, that same definition of diabetes will work for another measure in Million Hearts. I don't have to recode that information, and I don't have to spend lots and lots of time combing through the details to figure out the subtle nuances of difference between those two definitions.
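As a rough illustration of the idea, here is a hypothetical sketch in which two measures reuse one shared "diabetes" definition; the code set and patient fields are invented for illustration and are not real Million Hearts specifications:

```python
# Hypothetical sketch: one shared definition of "diabetes" (a value set of
# diagnosis codes) referenced by every measure, instead of each measure
# encoding its own subtly different definition.
DIABETES_VALUE_SET = {"E10", "E11"}  # illustrative ICD-10 category codes

def has_diabetes(patient):
    """Common data element: the same logic for every measure that needs it."""
    return any(code.split(".")[0] in DIABETES_VALUE_SET
               for code in patient["diagnosis_codes"])

def measure_a1c_checked(patients):
    """Numerator and denominator for 'diabetics with an A1C check'."""
    eligible = [p for p in patients if has_diabetes(p)]
    return sum(p["a1c_checked"] for p in eligible), len(eligible)

def measure_bp_controlled(patients):
    """A second measure reusing the exact same diabetes definition."""
    eligible = [p for p in patients if has_diabetes(p)]
    return sum(p["bp_controlled"] for p in eligible), len(eligible)
```

Because both measures call the same `has_diabetes`, a change to the shared value set propagates everywhere at once, which is the maintenance benefit Kevin describes.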
HIV Cascade [Slide 27]
Kevin Larsen: So here's an example of how this has been working already in HHS. This is called the "HIV Measurement Cascade," a family of measures that works across health care delivery and public health. And the idea here is that essentially the numerator of each of these measures becomes the denominator of the next. So let me talk you through this a little bit.
In the first column there, it's how many people living in an area—let's say a State—are diagnosed with HIV, out of all of the people that have it. The next column is linked to care: that numerator—the people that have been diagnosed with HIV—becomes the denominator for how many of those are then linked to care. Then you measure how many of the people that have had one visit—that is, been linked to care—have had repeated visits and are retained in care.
This model allows us to work up and down—from individual organizations all the way up to the population, from health system to State and back—with a shared meaning and understanding, where each of us can find our own place to improve care as we're doing measurement.
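The chaining Kevin describes, where each stage's numerator becomes the next stage's denominator, can be sketched as follows (the numbers and stage names here are illustrative only, not actual HIV cascade data):

```python
# Hypothetical sketch of the cascade logic: each stage's numerator becomes
# the next stage's denominator, so the measures chain from the population
# level down to retention in care.
def hiv_cascade(living_with_hiv, diagnosed, linked_to_care, retained_in_care):
    stages = [
        ("diagnosed", diagnosed, living_with_hiv),
        ("linked to care", linked_to_care, diagnosed),
        ("retained in care", retained_in_care, linked_to_care),
    ]
    return {name: num / denom for name, num, denom in stages}

# Illustrative numbers only:
rates = hiv_cascade(1000, 850, 700, 560)
# rates["diagnosed"] == 0.85; rates["retained in care"] == 0.8
```

Reading the result top to bottom shows where the largest drop-off is, which is how each organization along the cascade can "find its own place to improve care."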
HHS Measurement Alignment [Slide 28]
Kevin Larsen: The work of ONC in measurement is really to help us with what we call a "platform migration."
Typically, we've done a lot of quality measurement using data that wasn't meant for clinical quality. We often use claims data to measure clinical quality. And there are certainly some great measures that happen through claims, but we all know that measuring the number of visits someone's had for diabetes care isn't the same as measuring how well their A1C was controlled. And so, through electronic health records and the collection of these data elements in standardized ways, we are able to now build measures that get at that clinical data. And that data was already captured as a part of usual care. That A1C value already exists.
So the role of moving to electronic health records as a primary way to capture this data is to automate the data collection—to free up our chart extractors to actually go out and do quality improvement, rather than spending all of their time reviewing charts to find the data needed to build the measures.
From organizations that have done some of this work, using an electronic health record to do measurement can cost as little as half as much as the chart extraction method. And that's here in our early iterations of measurement. And so as Kate mentioned, there's work toward alignment at CMS, there's also work toward alignment with other organizations in the Measurement Policy Council, and then ideally we're moving toward a unified approach in which we use EHRs as one of our primary platforms—our primary data sources—for quality measurement.
Only Those Who Provide Care Can Improve Care [Slide 29]
Kevin Larsen: But we know that measurement isn't the end, that it's the beginning. And so, only those who provide care can improve care.
Car With No Dashboard [Slide 30]
Kevin Larsen: Next slide. So imagine that you have a car that has no dashboard. And you drive your car for a year, and at the end of the year you send some data in and 6 months later you get a report that says, "You drove too fast last year, please slow down."
That's kind of the position a lot of health care providers are in now. Their quality feedback comes very late; it doesn't come in real time, and it's really hard to act on because so much of it is delayed. So another one of the benefits of moving to this electronic health record platform is that we can ideally give this information both in real time and in aggregate, by the week or by the month, so people can use it to make decisions and make course corrections.
Clinical Decision Support: CDS 5 Rights [Slide 31]
Kevin Larsen: That real-time feedback is what we call "clinical decision support." Clinical decision support gives you information just as you need it. So if someone needs a flu shot, it actually tells you while you're taking care of the patient, as opposed to waiting 6 months and getting a letter in the mail that says, "You missed giving that flu shot."
And clinical decision support has these five rights: the right information to the right people through the right channels in the right formats at the right time. So ONC is working, alongside this quality measurement work, to help build out tools that also let providers act on and improve the information that we care about.
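A point-of-care rule like the flu-shot reminder Kevin mentions might look, in a deliberately simplified and hypothetical sketch, like this (the patient fields and season label are invented):

```python
# Hypothetical clinical decision support rule: surface the "right
# information" (a flu-shot reminder) at the "right time" (while the
# clinician is seeing the patient), rather than months later by mail.
def cds_alerts(patient, season="2014-2015"):
    alerts = []
    if season not in patient["flu_shots"]:
        alerts.append("Patient is due for a flu shot")
    return alerts
```

In a real system such rules run against the EHR record as the chart is opened, so the alert arrives through the right channel and format for the person delivering care.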
popHealth [Slide 32]
Kevin Larsen: In addition to doing it one patient at a time as they're visiting us, we also are working to help providers have a way to see this for all the patients they're responsible for. Many people call this "panel management" or "population health management." The idea here is that if you're responsible for a thousand patients in your practice, not only do you have an EHR that looks at the visits, but you have a way to see, "How are all of my patients doing today or this week on a series of quality measures?" And then, how can you troubleshoot and prioritize which kinds of outreach activities you might do for those patients, even if they don't have a visit?
Maybe that's sending them an alert or a reminder to come in for another visit. Maybe that's reminding them to get their flu shot this fall. popHealth is a tool that ONC worked to create and that is now free and open source for anyone that wants to use it. It leverages the certified standards that are already in electronic health records certified to Meaningful Use Stage 1 and Stage 2.
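The panel-scan idea, checking every patient you are responsible for rather than only those with visits, could be sketched like this; this is a hypothetical illustration, not popHealth's actual logic:

```python
# Hypothetical panel-management sketch: scan the whole panel (not just
# patients with recent visits) and flag who needs outreach this week.
from datetime import date, timedelta

def needs_outreach(patient, today):
    # Flag anyone overdue for a visit or missing this season's flu shot.
    overdue_visit = today - patient["last_visit"] > timedelta(days=365)
    missing_flu_shot = not patient["flu_shot_this_season"]
    return overdue_visit or missing_flu_shot

def outreach_list(panel, today=None):
    today = today or date.today()
    return [p["name"] for p in panel if needs_outreach(p, today)]
```

The output is the kind of prioritized worklist a practice could use to drive reminders and calls, independent of whether those patients happen to have an appointment.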
"Small Data is Our Short-term Focus." —Dr. Joe Kimura [Slide 33]
Kevin Larsen: So many of you have maybe heard of "big data." And big data is this idea that there's all this information all across the country that we can now get new scientific insights from. It's really exciting. But there's another term called "small data." Small data asks, "How do we make sure that the care teams and individuals delivering care have the data they need every day to take care of patients? How does the front desk clerk, who is handling all sorts of scheduling phone calls, have, every morning, the list of patients that need follow-up calls? How does a nurse manager on a hospital ward get real, live data on which patients in the current daily census need certain interventions done to meet quality goals?" That's small data, and that's what a lot of organizations are working on right now.
The Learning Health System [Slide 34]
Kevin Larsen: So ONC sees these all as interconnected. As part of the learning health system, the goal is that this data is collected once, as part of patient care delivery. It's collected using the standards I mentioned and consistent terminologies, and then it can be aggregated and shared, used in real time but also for public health, and ultimately leveraged for clinical research.
So that's following the top part of this slide. Following the bottom part of the slide, that's how we get a feedback loop: the aggregated data that goes into clinical research eventually becomes clinical guidelines. Those clinical guidelines can then inform public policy and quality measurement, and that public policy and quality measurement can ultimately support clinical decision support to be sure we're delivering the right care to the right person at the right time.
Future State: HIT-Enabled QI Toolkit [Slide 35]
Kevin Larsen: So additionally, ONC is working with CMS, AHRQ, and many other partners to build out a set of national infrastructure that makes this easier for everyone to do. An example of that is our electronic quality measures. They include not only what we have historically done, a "human-readable" version in which the measures are expressed as text; they also have a computer-readable version, with the goal that a new quality measure can come out and be automatically incorporated into an electronic health record with minimal to no programming.
We're not all the way to that place, but that is our vision and that is the work that we're busy doing—again, in partnership with many of you and many others across the Government—feeding that system with a set of supporting technologies and working with the vendors of your electronic health records and with registries and others. And with that, I'll turn it back over to Nancy. Thank you very much.
How to Find More Tools and Resources [Slide 36]
Heather Plochman: And this slide includes some of the tools and resources if you'd like to learn more about the topics discussed in today's Webinar. And we'll go ahead and move into questions.
Questions and Answers [Slide 37]
Heather Plochman: We've seen a few questions pop through on the chat box during the Webinar, but first let's open the phone lines to see if there are any questions there, and then we'll move to the chat box.
Questions and Answers [Slide 38]
Operator: Thank you. Ladies and gentlemen, if you would like to register for a question, please press the 1 followed by the 4 on your telephone. You will hear a three-tone prompt to announce your request. If your question has been answered and you would like to withdraw your question, please press the 1 followed by the 3. One moment please, for the first question. Our first question comes from the line of Danny Pickard. Please proceed.
Danny Pickard: Yes, good afternoon to all. My question is in regards to the connectivity between the vendors, the Government, the practitioners, and the patients themselves. Have the standards been developed to the point that everyone knows what to name their fields, so that when they're building their tools the fields are named the same way? And is there a place where we can find out the status of each vendor—for example, an EMR or a patient portal vendor—to see if they have already complied with the requirements to work with each other?
Kevin Larsen: So this is Kevin Larsen; I can take that. Some of those standards exist, and many of them we're working on together to build. We started with the things that were a high priority—things like demographics, blood pressure, etc. The place that you can go to see which vendors have complied is called the ONC CHPL, C-H-P-L. That's the Certified Health IT Product List, and it lists all of the vendors that have gone through and certified that their products meet the standards that we have put out in ONC certification rules. So we have started on this path; there's a core set of data elements that are already there, but there are many more that we need to build out.
Danny Pickard: Thank you.
Nancy Wilson: I have a question. This is Nancy. How does what you're doing and the Web sites that you're offering to people relate to the USHIK databases and Web sites?
Kevin Larsen: So, this is Kevin. USHIK is the United States Health Information Knowledgebase. It's at AHRQ. It's a terrific Web site that has a lot of information around health IT, including much information about "Meaningful Use." You'll find links to most of these things on the USHIK Web site. We work to have more than one point of entry to good information at HHS, but ideally each one links you to the right place. So, for example, USHIK will often link you to CMS, if CMS is the most appropriate place for you to get information.
Nancy Wilson: That actually wasn't—just FYI for those of you on the call—that wasn't a planted question. I actually had that question. Sorry, I probably shouldn't admit that.
Operator: Our next question on the lines comes from the line of Don Casey. Please proceed.
Don Casey: Well, hi. Can you hear me OK?
Nancy Wilson: Yes. Hi, Don.
Don Casey: Great. Hi, Nancy. Thanks for having this. It's great to see the progress. And I'm calling on behalf of the American College of Medical Quality. But really, my overarching question has to do with—and again, this is not meant to be critical—but the overarching question is, how or if and when we actually begin to consolidate our assessment of the impact—which is not a word on your slides—of these efforts.
And in particular, one can look back to the recent discussion about the hospital engagement networks. On the one hand, great job of engaging lots of people in a lot of activities, and on the other hand, not much to really look at in terms of how this whole thing is evaluated and measured internally in terms of impact. So that's one comment.
And the other is related to it, and it's that I think we're still not quite cross-linking evaluation methods on measures. So, for example, there have been some recent conversations at NQF on cost of care, and yet the measure developers and the whole consensus development process don't take into account well enough the cross-linking of the measurements of cost of care with the measurements of mortality, readmissions, adverse events, and evidence-based care.
And so, I just think it's time for us to really look much more closely at how we take all these data and make more sense out of them, because, ultimately, what we're trying to do is not only make care better, but make it less complex, right? So I would maybe not take issue, but tweak a little bit on Slide 29. Having had my own experience as a patient, I'd say that I, myself, have improved care as a patient—admittedly, I'm biased because I'm a provider. But the point is, I think that's what you want: to make things better and less complex. Thanks.
Kate Goodrich: So, this is Kate. Maybe I can just briefly respond to that. I think those are phenomenal comments. Starting with the impact of the measures, certainly—but there's a lot more to assess: the impact of a lot of programs and quality improvement efforts like the QIOs and the Partnership for Patients. All of that needs to happen.
We are actually required by law to assess the impact of measures every 3 years, and we do a lot of work to try to do that. We actually collaborate with our colleagues at AHRQ on some of that work as well. And as you might imagine, that's a tough thing to do. You have to figure out what the framework is for doing that and what the right questions are to ask, some of which, I think, are these combinations of quality and cost measures and how they work together and how they're related or not. And one of the things we have definitely found in our work is good news: the trends have mostly been in the right direction, to a degree. But it's hard to assess, because the data are not all there over an extended period of time.
So only a limited number of the measures that we use in our programs—and we use a lot of measures—have more than 3 years of data associated with them. That will improve over time; we know from the last assessment that we did that we'll be able to do more. But some of the questions we're asking are exactly the ones you bring up: "What is the relationship between how providers score or perform on cost measures and how they perform on other types of measures, like patient experience, readmissions, and mortality?" Exactly the ones that you talk about.
So I think, especially for cost measures, these are relatively early days in the development and use of those measures. And I think there's a lot of measurement science that still needs to be developed behind those measures, and there just really need to be better data sources for those measures. We have claims; that works to a degree, but we know there are a lot of hidden costs that aren't captured in any kind of systematic way, and we do need to get better at that. So I just want to thank you for that question. I think it's very insightful.
Operator: There are no further questions for the phone lines at this time. Please proceed with the chat questions.
Heather Plochman: OK, great. Thank you. I think we have time for two quick final questions. The first is, "Does HHS see a role for data collection from mobile health devices for patient-reported outcomes?"
Kevin Larsen: This is Kevin Larsen; I'll take that. We certainly do. Last year at a conference called Datapalooza, which is a Government-sponsored conference around health IT data, we actually brought mobile health developers specifically developing tools for patient-reported outcomes to that conference. Many of these tools are already in use, and we are working on standards to help them be more easily used across the whole industry.
Heather Plochman: OK, and our final question—this one is for Nancy: "When will the National Quality Strategy Annual Progress Report be published, and where can we access it?"
Nancy Wilson: It's in the Office of the Secretary. It's been approved by all the HHS Agencies and the White House, so hopefully next week it will be at the Working for Quality Web site, which I think is on your screen right now. So stay tuned for that.
We really try to feature groups that are doing promising, terrific work in addressing each of these priority areas; that's the primary goal of the annual progress report. And we would love to hear about what you're doing so that we can feature you in next year's annual report. So, thank you, and thank you for joining us. Thank you very much.
Thanks for attending today's event. [Slide 39]
Heather Plochman: As you can see, the presentation archive will be available on the Web site that's listed on the slide within 2 weeks. And thank you very much for joining. Bye-bye.
Operator: Ladies and gentlemen, that does conclude the Webinar for today. We thank you for your participation, and we ask that you please disconnect your lines.
Page originally created November 2016