Section 7: Measuring Value in a Care Management Program
Demonstrating the value of care management programs is essential, both to ensure that they are providing value to Medicaid beneficiaries and to garner support from the State legislature and other stakeholders. The term "value" can be interpreted broadly, encompassing ideas such as improved health outcomes for members, efficient use of services, provider adherence to evidence-based standards of care, and slowed spending growth.
For any State implementing a care management program, developing a measurement strategy is essential to demonstrating value. A successful measurement strategy allows a State to:
- Evaluate whether the program is successful.
- Identify areas for improvement.
- Fulfill contractual parameters.
- Build support for the program.
Incorporating information from 13 State Medicaid care management programs in the initial AHRQ Learning Network and supporting literature, this section of the Guide, Measuring Value in Care Management Programs, provides information to State Medicaid staff about:
- Measurement strategy design.
- Examples of measures.
- Measurement strategy implementation.
- Communicating results to stakeholders.
Measurement Strategy Design
A measurement strategy evaluates whether a care management program has met its goals by using a set of measures with expected outcomes. When designing a measurement strategy, considering the program goals and how program interventions will lead to these goals is helpful. The conceptual model in Exhibit 7.1 demonstrates how interventions lead to desirable outcomes.
Organizational policies and interventions work together to impact provider and member behavior (Step 1). If the interventions are effective, they should lead to high-quality clinical care and effective patient self-care (Steps 2A and 2B), which will yield desirable health and economic outcomes (Step 3). States can measure program successes at each step:
Step 1. Program Process. Are the program interventions and policies being implemented as planned?
Step 2. Intervention Impact. Are program interventions and policies yielding their intended results (high-quality care and effective self-care), which should lead to better outcomes?
Step 3. Health Outcomes. Is the program resulting in meaningful changes in health and economic outcomes?
Using this conceptual model, a State can design a program that yields desired outcomes and then create a measurement strategy that determines whether the program results in meaningful change. In addition to considering the link between interventions and outcomes, the State also should choose measures based on the following considerations:
- Quality and usefulness of measures.
- Balance of process and outcome measures.
- Source of measure.
- Feasibility of data collection.
- Potential for improvement.
Quality and Usefulness of Measures
Measures are important for several reasons, including their appeal to stakeholders, ability to identify areas for program improvement, and capacity to determine program value in terms of cost savings, clinical improvements, or improvements in care.
Stakeholders have different interests and investments in care management programs that States must consider when choosing program measures. For example, providers might be more interested in clinical outcomes, while some legislators might be more interested in cost savings. To ensure that value is proven to all stakeholders, States should use a variety of measures that appeal to a broad stakeholder group. Washington measured functional assessment, along with clinical measures such as testing rates and hospitalizations. Pennsylvania measures the number of asthmatic patients who self-reported the use of a controller medication as well as the rate of hospital admissions for patients with asthma.
States also must consider whether their measures are accurately gauging the success of their interventions. The measures a State chooses should be appropriate to the interventions it plans to implement. For example, if an intervention centers on encouraging providers to follow standardized guidelines, then a process measure related to providers, such as whether asthma severity is noted in the chart of patients with asthma or whether patients with diabetes receive a foot examination at least once a year, might be appropriate. Alternately, if an intervention focuses on improving patient self-management, then an appropriate measure might track weight loss or whether patients have a self-management goal. When choosing measures, a State should consider the program's broader aims and the expected outcomes resulting from the program and the interventions. An appropriate measure should be able to track a specific intervention's outcomes.
Patient Behavior Indicators
States should measure indicators of patient behavior, such as medication adherence and continuity of care. These measures are particularly effective at determining the success of programs that focus on patient self-management.
Considering the strength of different data sources also is important. Self-reported measures can be essential for information on influenza vaccinations, aspirin use, satisfaction, knowledge of self-care and treatment goals, and quality of life. However, self-reporting is an invalid and unreliable way to collect data on clinical indicator values, such as blood pressure readings and HbA1c levels. Therefore, States should strive to find alternative ways to collect data on clinical indicator values, such as through chart reviews or by obtaining lab results. Finally, a State should consider the types of measures it is collecting. Outcome measures are better indicators of a program's success but are often difficult to collect. Process measures are easier to collect and can be affected in a shorter period of time, but the evidence base on their overall impact on health outcomes varies. States should strive to have a balance of process and outcome measures.
Balance of Process and Outcome Measures
The measures a State chooses will depend on the program structure and the State's goals in operating the program. Although the measures for every State might differ, incorporating a variety of measures in a measurement strategy is important, because by doing so a State can identify both short- and long-term successes and failures of program design, interventions, and implementation. Exhibit 7.2 defines each of the three types of measure to consider—structure, process, and outcome—and lists its positives and negatives and an example.
Exhibit 7.2. Types of measures and examples
Structure: Infrastructure required to deliver high-quality care.
Data sources:
- Policies and procedures.
- Program monitoring reports.
Advantages:
- Easy to measure.
- Directly actionable by program administrators.
Disadvantages:
- Link to health outcomes often weak.
- Structure often fixed and cannot be changed.
Examples:
- Qualifications of nurse care managers.
- Protocols for identifying high-risk participants.

Process: Services that constitute recommended care.
Data sources:
- Claims data.
- Self-reported data.
- Care management reports and logs.
Advantages:
- Directly actionable by program or providers.
Disadvantages:
- Impact on clinical outcomes variable.
- Depends on administrative measures.
- Evidence base for impact of process measures varies.
- Might fail to match intervention.
Examples:
- Percentage of diabetic patients with retinal eye exam.
- Percentage of heart failure patients advised about salt intake.

Outcome: Measures of health and disability.
Data sources:
- Medical records.
- Lab results.
- Self-reported data.
Advantages:
- Ultimate purpose of care management programs.
- Most relevant to patients and policymakers.
- Often included in vendor contracts.
Disadvantages:
- Influenced by extraneous variables.
- Time lag to change might be long.
- Difficult to collect clinical data.
Examples:
- Percentage of asthmatic patients visiting emergency room.
- HbA1c levels among diabetic patients.
- Average medical costs per patient.
By using structure, process, and outcome measures, a State can ensure that it is receiving a complete picture of its program's value.
Pennsylvania's program, ACCESS Plus, is designed to improve the quality of care delivered to its Medicaid population, particularly for the ACCESS Plus (PCCM) population. To demonstrate that the State is achieving this goal, Pennsylvania agreed on a measurement strategy with its vendor that includes up to seven measures for each disease it covers. Pennsylvania's measures vary by type and include financial measures, clinical performance indicators, and use measures, such as:
- Readmission rates for patients with congestive heart failure (CHF).
- Patients with asthma who self-report the use of a controller medication.
- Patients with diabetes who receive an annual dilated retinal exam.
Pennsylvania included more than 40 measures in its vendor contract but later decided to narrow its focus to a smaller group of measures that were closely linked to interventions and were meaningful to stakeholders.
North Carolina Medicaid met with physicians to set performance measures for its asthma and diabetes care management programs. One of North Carolina's goals is to choose measures that have demonstrated quality improvement and cost impact, such as:
- Inpatient admission rates for asthma and diabetes.
- Percentage of asthma patients classified by stage of disease severity.
- Percentage of asthma patients with a written asthma management plan.
- Diabetic flow sheet in use on the medical record.
- Blood pressure test at every continuing care visit.
North Carolina included measures that could be captured only through chart audits (e.g., asthma staging, diabetic flow sheet use). The State considered collecting these measures important for assessing provider care. North Carolina contracted with local Area Health Education Centers, using a foundation grant award, to conduct randomized chart audits annually.
Source of Measure
Depending on the type, scope, and focus of a care management program, the process and outcome measures a State tracks likely will be unique to the program. However, when deciding on clinical process and outcome measures, a State might choose to use measures from nationally recognized measurement sets, such as the Medicaid Healthcare Effectiveness Data and Information Set (HEDIS) or the Ambulatory Care Quality Alliance (AQA) Ambulatory Care Starter Set. A State using HEDIS and AQA has the advantage of avoiding the lengthy process of gaining consensus on specific measures and can also feel confident that the measures chosen are valid and reliable.z Selecting standardized measures might lessen the burden on providers within a State, particularly if payers across programs can agree to use a set of standardized measures. Measures such as HEDIS also might be collected for a State's managed care program, allowing the State to compare the performance of its MCOs and FFS care management program. The use of national measures such as HEDIS can also allow a State to compare its program with programs in other States. The Centers for Medicare & Medicaid Services' (CMS) Guide to Quality Measures: A Compendium, Volume 1, provides a compilation of nationally recognized quality measures. When accessed electronically through the CMS Web site, the measures may be sorted by target population, care setting, disease or condition, measure type, or any combination of these variables.
National measures allow a State to:
- Gain consensus more easily.
- Standardize measures across payers.
- Compare measures to its managed care program.
States might have to modify the parameters of national measures to fit the unique characteristics of the Medicaid care management program. For example, Medicaid HEDIS measures require the eligible population to have continuous enrollment for 1 year prior to the measurement year. This requirement might be too limiting for a care management program where members may only be enrolled in the intervention for 3 to 6 months. Therefore, some States have created "HEDIS-like" measures that allow greater flexibility but are similar to the Medicaid HEDIS measures. For example, States have modified HEDIS measures by disregarding or decreasing the continuous eligibility requirement. This allows the State to capture a larger portion of its target care management population for measurement and evaluation purposes.
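The "HEDIS-like" modification described above amounts to relaxing the enrollment filter that defines the eligible population. The sketch below illustrates the idea; the member records, field names, and month thresholds are hypothetical, not drawn from any real HEDIS specification or State data system.

```python
# Hypothetical sketch: relaxing the continuous-enrollment requirement
# to capture more of a care management population for measurement.

def eligible(members, min_enrolled_months):
    """Return members enrolled at least `min_enrolled_months` of the
    12 months prior to the measurement year (illustrative rule)."""
    return [m for m in members if m["months_enrolled"] >= min_enrolled_months]

members = [
    {"id": "A", "months_enrolled": 12},  # meets the standard 1-year requirement
    {"id": "B", "months_enrolled": 5},   # enrolled mid-year
    {"id": "C", "months_enrolled": 3},   # enrolled for one quarter
]

strict = eligible(members, 12)   # standard rule: 1 year of continuous enrollment
relaxed = eligible(members, 3)   # "HEDIS-like" rule: 3-month minimum

print(len(strict), len(relaxed))  # → 1 3: the relaxed rule captures more members
```

The relaxed denominator trades comparability with standard HEDIS rates for broader coverage of short-tenure program members.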
Although using nationally accepted measures offers benefits, these measures might be inapplicable for certain populations within the Medicaid programs. States might have to develop "homegrown" measures for certain program components. A State interested in tracking pressure ulcers in its Supplemental Security Income (SSI) population, or the number of SSI-eligible patients with a unique set of comorbid conditions, might decide to create measures that will help determine the level and quality of care being delivered to these specific groups. Arkansas created measures to track its interventions focused on reducing complications from high-risk pregnancy. The State collected data for measures such as the number of maternal-fetal consults and the number of low birth-weight infants who had intraventricular hemorrhaging after birth.
Homegrown measures allow a State to:
- Measure diseases without national measures.
- Customize measures to the characteristics of the Medicaid population.
- Design measures with local providers to gain buy-in.
Feasibility of Data Collection
States must consider the administrative burden of data collection for each type of measure. A State can collect information through administrative data, program data, clinical data, and patient surveys. Each of these data collection sources varies in ease of collection and usefulness, as shown in Exhibit 7.3.
Exhibit 7.3. Data Sources
Administrative data: claims.
Advantages:
- Low cost and accessible.
- Provides process measures.
Disadvantages:
- Coding errors.
- Provides no outcome measures.

Program data: patient assessments, nurse care manager reports.
Advantages:
- Relevant for some outcomes.
Disadvantages:
- Often self-reported and less reliable.

Clinical data: medical records and lab results.
Advantages:
- Best source for outcome measures.
Disadvantages:
- Records sometimes inaccurate.

Patient surveys.
Disadvantages:
- Low response rate.
- Self-reported data sometimes unreliable.
States often struggle with balancing the value of collecting clinical data (usually through individual chart reviews) and the associated burden on the State and the provider. States collect data through chart reviews because medical records are the best source for outcome measures. Chart reviews are also required for hybrid HEDIS measures. The HEDIS hybrid methodology is more robust than the typical administrative HEDIS measures because it combines administrative data available from claims with clinical data found in medical records. HEDIS 2008 contains hybrid specifications for measures such as cholesterol management, controlling high blood pressure, and comprehensive diabetes care. States can also use hybrid HEDIS measures to compare their program with MCOs that are also collecting hybrid HEDIS measures. When considering conducting chart reviews, States should ask:
- What role will providers and their office staff play in data collection?
- Will the State send auditors to collect a sample of chart information? How much will collecting this sample cost?
- Will the State provide tools, such as registries, to help expedite the process? How will the State encourage providers to use the tools?
- What resources within the State can ease the process for providers?
- Are any State agencies already conducting chart reviews and available as potential partners to share costs with?
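The hybrid methodology described above counts a member as meeting a measure if either administrative (claims) data or chart-review data shows the service. A minimal sketch of that logic, with entirely hypothetical member records and field names:

```python
# Illustrative hybrid-measure sketch: a sampled member is compliant if
# EITHER claims data OR chart review documents the service.

def hybrid_rate(sample):
    hits = sum(1 for m in sample if m["claim_hit"] or m["chart_hit"])
    return hits / len(sample)

sample = [
    {"id": 1, "claim_hit": True,  "chart_hit": False},
    {"id": 2, "claim_hit": False, "chart_hit": True},   # found only in the chart
    {"id": 3, "claim_hit": False, "chart_hit": False},
    {"id": 4, "claim_hit": True,  "chart_hit": True},
]

admin_only = sum(1 for m in sample if m["claim_hit"]) / len(sample)
print(f"administrative-only rate: {admin_only:.0%}")  # → 50%
print(f"hybrid rate: {hybrid_rate(sample):.0%}")      # → 75%
```

The gap between the two rates (member 2 in this toy sample) is exactly the care that claims data alone would miss, which is why hybrid rates are considered more robust.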
North Carolina performed chart reviews to obtain information on outcomes measures, with funding in the first year provided by grants from local and national nonprofit organizations. Subsequently, the State partnered with its State Area Health Education Center to accomplish the reviews, which cost $18 per chart in 2005. North Carolina uses outcomes data to communicate cost savings to the State legislature and provide information on quality.
Potential for Improvement
States must consider whether the measures they choose have potential for improvement and within what timeframe they may expect to see improvement. Specifically, States should ask:
- Does evidence exist that the measure can be improved?
- Are the interventions in our program likely to improve the measure?
- Can the interventions impact the measure in our required timeframe?
States often find that including measures that can yield information over different lengths of time is especially important. For example, a Medicaid agency might be required to report back to the legislature on a program's progress 6 months after the program has been launched, but it is unlikely that the interventions would be able to yield clinical outcome changes in such a short period. In this instance, the Medicaid agency would be best off collecting several structure or process measures that could be used in the short term as well as monitoring outcomes measures that can yield different sorts of information in the longer term.
Examples of Measures
Exhibit 7.4 outlines examples of measures that States have incorporated into their care management programs targeting three common diseases—asthma, diabetes, and CHF. For each disease, several example measures are listed, including possible numerator and denominator sources, types of interventions that might effect change in the measure, and expected timeframe to see change.
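Each measure of the kind outlined above reduces to a rate: a numerator (members who received the service or experienced the outcome) over a denominator (members eligible for the measure). A small sketch, using hypothetical member IDs and a made-up retinal exam example:

```python
# Generic rate-measure sketch: numerator members must also be in the
# denominator. All IDs and data here are illustrative.

def measure_rate(denominator_ids, numerator_ids):
    eligible = set(denominator_ids)
    hits = eligible & set(numerator_ids)  # only count hits among eligibles
    return len(hits) / len(eligible)

# e.g., members with diabetes (denominator) and members with a claim for
# a dilated retinal exam during the year (numerator source: claims data)
diabetic_members = ["m1", "m2", "m3", "m4", "m5"]
retinal_exam_claims = ["m2", "m4", "m9"]  # m9 is outside the denominator

print(f"{measure_rate(diabetic_members, retinal_exam_claims):.0%}")  # → 40%
```

Keeping numerator and denominator sources explicit (claims, chart review, survey) makes it easier to judge which interventions can move a measure and how quickly.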
Measurement Strategy Implementation
After choosing a set of measures, States can take several steps to ensure their measurement strategy will succeed. States should:
- Set measurement goals. States can determine the success of their programs by setting measurement goals.
- Begin collecting data early. Early data collection helps States identify and solve inevitable data collection problems before results are required.
- Work with stakeholders to develop measures. Data is an important tool for garnering stakeholder support. By involving stakeholders early, States can earn their support and trust.
Set Measurement Goals
To create a successful measurement strategy, a State must choose goals for its measures as well as choose the measures themselves. States might set finite goals, such as: "Seventy-five percent of members will receive an assessment." Or States might set goals for improvement, such as: "The number of members receiving assessments will increase by 5 percent every quarter until 90 percent of members receive an assessment." To avoid setting unrealistic goals, States should consider available benchmarks from other States and data sources such as HEDIS.
When setting improvement goals, distinguishing between absolute and relative improvement is important. For example, a difference exists between a five percentage point improvement (from 70 percent to 75 percent) and a 5 percent improvement (5 percent of 70 percent is 3.5 percentage points, from 70 percent to 73.5 percent). The former represents an absolute improvement goal, the latter a relative improvement goal. This concept is especially important when a State is contracting with a care management vendor and might have financial rewards tied to performance.
Recognizing that many measures have a "ceiling," beyond which further improvement is challenging, also is important. For example, the percentage of members with asthma who receive an influenza vaccine should increase every year. However, as that percentage rises, it becomes increasingly difficult for the vendor or program to meet its target. The State should set realistic goals for improvement and be ready to adapt these goals as the measure approaches its ceiling.
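The absolute-versus-relative arithmetic above, and one possible way to set targets that respect a ceiling, can be worked through numerically. The gap-closure rule below is an illustrative convention, not a method prescribed by any State or by HEDIS:

```python
# Worked example: absolute vs. relative improvement from a 70% baseline.
baseline = 0.70  # 70% of members receiving an assessment

absolute_target = baseline + 0.05  # five percentage points -> 75.0%
relative_target = baseline * 1.05  # 5% of 70% is 3.5 points -> 73.5%

print(f"absolute goal: {absolute_target:.1%}")  # → 75.0%
print(f"relative goal: {relative_target:.1%}")  # → 73.5%

# Ceiling-aware targets (hypothetical rule): close a fixed share of the
# remaining gap to 100% each period, so targets shrink near the ceiling.
def gap_closure_target(current, share=0.10):
    return current + (1.0 - current) * share

rate = baseline
for _ in range(3):
    rate = gap_closure_target(rate)  # 70% -> 73% -> 75.7% -> 78.1%
```

Tying vendor incentives to a rule like this keeps goals realistic as a measure approaches its ceiling, rather than demanding the same absolute gain every year.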
Begin Collecting Data Early
States have reported that unexpected data issues are common and that a frequent lesson learned is to allow as much time as possible for data collection. Consequently, data collection and measurement should begin as soon as a new program launches, if not before. In fact, a State can collect baseline data before the program begins, enabling it to set expectations for measures and to target populations and diseases appropriately.
By collecting data early, a State can identify problems with its data or data collection methods before results are required. For example, if a State wants to know whether the volume of calls to its call center has increased since beginning a public awareness campaign but lacks prior data, it has no way to measure improvement. Pennsylvania was able to draw its baseline data from calendar year 2004, which was extremely useful in determining whether changes seen in 2006 were part of an ongoing trend or resulted from recent program interventions.
Allow Ample Time for Data Collection To:
- Identify data problems early.
- Capture real-time data.
- Identify program improvements early.
Beginning data collection early also allows States to identify problems with a program or specific intervention early in the program's implementation. Data collected early in a program's existence can prove invaluable in helping the implementation team understand whether progress is being made per expectations. A program failing to deliver expected results has not necessarily failed; minor "mid-course corrections" might be undertaken to strengthen interventions and help a State reach its goals.
Work with Stakeholders to Develop Measures
Data can constitute an effective tool for gaining support from stakeholders, but only if they trust the data and agree with the measures. To facilitate their trust and agreement, involving stakeholders, especially providers, in the measure selection process often is useful. Stakeholders can participate at varying levels of intensity. At a high level of intensity, a provider advisory group might select the program measures. At a lower level, a provider advisory group might simply review proposed measures and offer feedback. At any level of intensity, collecting feedback from stakeholders can gain buy-in for the measures and their results.
Communicating Results to Stakeholders
In addition to designing and implementing a successful measurement strategy, States should consider how they will communicate the results of their measures to stakeholders. Typically, a care management program has many stakeholders with an interest in the program's outcomes. A State should be prepared to present measurement results to each of these different stakeholders. Please go to Section 2: Engaging Stakeholders in a Care Management Program for more information on communicating with stakeholders.
States must present meaningful measurement results focusing on three to five key measures that demonstrate program success in a way that stakeholders, especially legislators, can comprehend (e.g., a non-clinician might understand the importance of reduced ER visits but not increased HbA1c screening). Reporting too many or incomprehensible measures only serves to confuse and turn off stakeholders.
States can use a different strategy for communicating with providers. Measures and their results can be used to help providers improve their practices as well as to gain provider support. Regular updates to providers and their associations on overall program success, especially process and outcome measures, can help garner support. Please go to Section 4: Selecting Care Management Interventions for more information on updates for providers.
Frequently, the best data (health and financial outcome data) is unavailable early in the program. Nevertheless, States should not wait until outcome data is available to provide program updates. Instead, they should report other measurement results (e.g., process or structure measures) regularly to inform stakeholders of progress. Communicating early measurement results in the context of the State's goals can help manage stakeholder expectations and ensure that stakeholders are receiving correct and positive information about the program.
A measurement strategy is critical for determining program value and ensuring the program is as effective as possible. The most successful measurement strategies are designed in conjunction with program interventions and reflect program goals. States also must consider their available resources, stakeholder needs, and the evidence base for measures. Finally, measurement is helpful only if the results are used to improve the program and communicate program value to stakeholders.
z Llanos K, Rothstein J, Bailit M, et al. Pay-for-performance in Medicaid: an environmental scan. Publication forthcoming.